May 29, 2007

Reconfigurable Costume

nicole1.jpg

A Platform for Interactive Performance

Leah Buechley's reconfigurable costume consists of a torso piece and an assortment of sensing appendages that can be snapped onto the torso. Sensors in the appendages include muscle flex sensors, accelerometers, bend sensors and touch sensors. Sensor data is relayed via a Bluetooth module embedded in the torso to a computer, where it can be used to control or generate music, video and other multimedia content.
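
The signal path described here — wearable sensors, a Bluetooth serial link, and a host computer that maps the readings onto media parameters — might be sketched roughly as below. This is a minimal illustration under stated assumptions (the serial port name, the packet format, and the OSC addresses are all invented), not Buechley's actual software.

```python
# Hypothetical sketch: relay wearable-sensor readings arriving over a
# Bluetooth serial link to a media application as OSC messages.
# Port name, packet format, and OSC addresses are assumptions.
import serial                                   # pyserial
from pythonosc.udp_client import SimpleUDPClient

PORT = "/dev/rfcomm0"                           # Bluetooth serial port (assumed)
BAUD = 57600
client = SimpleUDPClient("127.0.0.1", 9000)     # e.g. Max/MSP or Pd listening here

with serial.Serial(PORT, BAUD, timeout=1) as link:
    while True:
        line = link.readline().decode("ascii", errors="ignore").strip()
        if not line:
            continue
        # Assume each packet is "flex,accel_x,accel_y,bend,touch"
        try:
            values = [int(v) for v in line.split(",")]
        except ValueError:
            continue                            # skip malformed packets
        # Scale raw 10-bit ADC readings to 0.0-1.0 and forward them
        for name, raw in zip(["flex", "accel_x", "accel_y", "bend", "touch"], values):
            client.send_message(f"/costume/{name}", raw / 1023.0)
```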

The costume, built using her version 2.0 e-textile construction kit, is form-fitting and stretchy. The electronic modules are kept as small as possible so they do not interfere with the dancer. The costume was used in an improvisational performance in May 2007 to control a player piano. This performance was a collaboration with Michael Theodore, a professor in the music department, and Nicole Predki, a graduate student in the dance department. Click here for a movie (150MB).

Posted by jo at 06:37 PM | Comments (0)

May 21, 2007

Helen Thorington Interviews Miya Masaoka

cockroaches_legs.jpg

Performing Bodies, Insects and Plants

Miya Masaoka is a musician, composer and performance artist. She has created works for koto, laser interfaces, laptop and video and written scores for ensembles, chamber orchestras and mixed choirs. In her performance pieces she has investigated the sound and movement of insects, as well as the physiological responses of plants, the human brain, and her own body.

Helen Thorington: Miya, you were trained in Japanese court music as well as contemporary music and I understand have expanded on the playing techniques of the koto – first by using extended techniques, but more importantly, by building a Laser Koto. For those who don’t know, can you tell us about the koto and how you developed it? What is the Laser Koto and how does it work?

Miya Masaoka: Sometimes various events, thoughts and inspiration converge in particular ways and evolve over a period of time; I would say this was the case for the Laser Koto. For many years I had been trying to develop ways of extending the koto electronically – and continue to do so – and along these lines I was an artist in residence at STEIM in Amsterdam and worked with Matt Wright at CNMAT to develop ways of building an interface for real-time processing and sampling using gestural controllers and other ways of capturing and modifying sound. We recorded and mapped 900 koto samples that could be accessed in various ways... Continue reading on Networked_Music_Review >>. Ask Miya questions until July 7, 2007.

Posted by jo at 11:16 AM | Comments (0)

May 10, 2007

mech[a]OUTPUT

070531P_full.jpg

koosil-ja / danceKUMIKO

The Japan Society presents mech[a]OUTPUT by koosil-ja / danceKUMIKO :: Thu-Sat May 31- June 2, 2007.

Radical New York-based choreographer/dancer/singer-songwriter/new media artist Koosil-Ja presents an electrifying multimedia dance-performance with a live 3-D environment, seamlessly incorporating elements of traditional noh music and choreography from the classic noh play Dojoji. The legends surrounding Dojo-ji Temple in Wakayama, southeast of Osaka, have inspired numerous noh and kabuki plays about the vengeful spirit of a spurned woman. By juxtaposing the restrained and subtle choreography of Dojoji with 3D world imaging projected onto a large screen, the daring Bessie Award and Guggenheim Fellowship-winning artist Koosil-ja transposes the work into her own aesthetic context, creating an innovative blend of modern and traditional, digital and flesh.

The production features 3D world design and production by Claudia Hart, 3D interactive interface design and performance by John Klima, live Neo Punk music by Geoff Matters, dramaturgy by Nanako Nakajima, Pendulum & Physical Apparatus Design and Kinetic Engineering by Michael Casselli, Head Gear by Tara Webb, and Video Projection Super Engineering by Benton-C Bainbridge.

With:

Geoff Matters (Live Neo Punk Music and Software Design)
Nanako Nakajima (Dramaturgy)
Michael Casselli (Pendulum & Physical Apparatus Design and Kinetic Engineering)
Claudia Hart (3D World)
John Klima (3D Interface & Live Performance)
koosil-ja (Concept, Dance, Video, Video pendulum, Song, Sound Installation and Costume)
Tara Webb (Head Gear)
Benton-C Bainbridge (Video Projection Super Engineering)

Dates: Thu-Sat May 31- June 2, 2007
Time: 7:30PM
Location: 333 East 47th Street, btwn 1st and 2nd Ave. NYC
Tickets: $25/$20 Japan Society members.
Reservations: Japan Society (212) 715-1258
at JAPAN SOCIETY Ticket Information

STUDENT RUSH: $12.50 (50% off!)
Pending availability, Student Rush tickets will go on sale an hour
before showtime. Valid ID required, 2 tickets max per ID.

mech[a]OUTPUT is made possible by a commission from Japan Society; funds from American Music Center Live Music for Dance, and New York City Department of Cultural Affairs; and generous individual contributions.

Posted by jo at 03:37 PM | Comments (0)

April 15, 2007

Turbulence Commission: The Simultaneous Translator

interface_sm.jpg

ONLINE Performance Today

Turbulence Commission: The Simultaneous Translator by John Roach and Willy Whip [Requires Windows OS] LIVE PERFORMANCE: Sunday April 15; 12:00 PM EST to 3:00 PM EST [Mac users can listen via the player of their choice].

"The Simultaneous Translator" (SimTrans) is a Windows based audio interface that enables anyone to load audio streams and manipulate them in real time on the Internet. SimTrans makes the delays and fluctuations of the Internet visible and audible. The Internet becomes your collaborator as you create your mix, and the instability you usually try to avoid becomes a tool for creation. Distance and delay are manifest within the interface numerically and as a series of sliding heads; there is also a link to Google Earth where you can watch the dynamic flight of data travel between yourself and the audio source.

“SimTrans” is a 2007 commission of New Radio and Performing Arts, Inc., (aka Ether-Ore) for its Turbulence web site. It was made possible with funding from the Murray G. and Beatrice H. Sherman Charitable Trust.

THE PERFORMANCE: "The Simultaneous Translator" grew out of the artists’ live networked performance project "Simultaneous Translation," in which the delays of the internet are used to dynamically effect the live performances of geographically distant artists.

The upcoming performance will take place from 12:00 PM EST to 3:00 PM EST on Sunday April 15. Log on via http://turbulence.org/Works/simtrans.

Participants: Greg Davis (USA), Kenneth Goldsmith (USA), John Hudak (USA), Keyman (France), Lawrence Li (China), Mice69 (France), Miguel Ramos (Spain), Joe Reinsel (USA), John Roach (USA) and Willy Whip (France).

BIOGRAPHIES

JOHN ROACH doesn't consider himself an installation artist, a sound artist, or a sculptor, but prefers to think of himself as a nomad, touching down in whatever place is most hospitable to his ideas. Recent projects have been an installation at the 2B Gallery in Budapest, Hungary; a collaborative performance with objects and video at the Saint Stephen Museum in Szekesfehervar, Hungary; and a web video project called Sweet Music. He continues to work with Willy Whip on their long-standing live networked performance project Simultaneous Translation.

WILLY WHIP is a designer and teacher in hypermedia interactivity. Outside his institutional work he likes to produce mashups that fertilize his own secret garden. This personal research and development leads him on a quest for hybrids: connect this information to that information; grow new contents; release new senses. Recent activity includes projects with the artists Anika Mignotte, Reynald Drouhin, and Du Zhenjun.

Posted by jo at 11:55 AM | Comments (0)

April 10, 2007

Networked_Music_Review Launches Today!

nmr_logo.jpg

New Radio and Performing Arts, Inc. / Turbulence.org is pleased to announce the launch of Networked_Music_Review (N_M_R).

N_M_R is a research blog focused on emerging networked musical explorations made possible by computers, the Internet, and mobile technologies. N_M_R gathers data about projects, performances, composers, musicians, software and hardware. It includes interviews, articles, papers and reviews. N_M_R also provides up-to-date information on conferences, workshops, competitions, and festivals. Readers may comment on each of the blog entries and converse with interviewees. N_M_R Features:

LIVE STAGE: Online and offline events are spotlighted in real time;
INTERVIEWS: Interviewees are available for discussion via the comments. We begin with an interview with Jason Freeman; please join the conversation, which begins right now and ends on May 20;
NETWORKED_MUSIC_WEEKLY (N_M_W): selected projects, artists, or events sent to subscribers weekly;
NETWORKED_MUSIC_NEWSLETTER (N_M_N): a monthly newsletter sent to subscribers;
GUEST BLOGGERS: Ryan Sciano is our first Guest Blogger.

If you would like to be a guest blogger, please contact us at: newradio at turbulence dot org

RSS Feeds are available for: Main page - all the posts; Live Stage; Interviews; and Net_Music_Weekly. Subscribe now.

N_M_R is supported by the New York State Music Fund established by the New York State Attorney General at Rockefeller Philanthropy Advisors. It was designed by Shual and built by Dan Phiffer.

We look forward to your comments and suggestions.

Helen and Jo

Posted by jo at 04:48 PM | Comments (0)

Turbulence Commission: The Simultaneous Translator

interface_sm.jpg

Live Performance on April 15

Turbulence Commission: The Simultaneous Translator by John Roach and Willy Whip [Requires Windows OS] LIVE PERFORMANCE: Sunday April 15; 12:00 PM EST to 3:00 PM EST

The Simultaneous Translator (SimTrans) is a Windows-based audio interface that enables anyone to load audio streams and manipulate them in real time on the Internet. SimTrans makes the delays and fluctuations of the Internet visible and audible. The Internet becomes your collaborator as you create your mix, and the instability you usually try to avoid becomes a tool for creation. Distance and delay are manifest within the interface numerically and as a series of sliding heads; there is also a link to Google Earth where you can watch the dynamic flight of data travel between you and the audio source.

“SimTrans” is a 2007 commission of New Radio and Performing Arts, Inc., (aka Ether-Ore) for its Turbulence web site. It was made possible with funding from the Murray G. and Beatrice H. Sherman Charitable Trust.

THE PERFORMANCE: The Simultaneous Translator grew out of the artists’ live networked performance project "Simultaneous Translation," in which the delays of the internet are used to dynamically affect the live performances of geographically distant artists.

The performance will take place from 12:00 PM EST to 3:00 PM EST on Sunday April 15. Log on via http://turbulence.org/Works/simtrans.

Participants: Greg Davis (USA), Kenneth Goldsmith (USA), John Hudak (USA), Keyman (France), Lawrence Li (China), Mice69 (France), Miguel Ramos (Spain), Joe Reinsel (USA), John Roach (USA) and Willy Whip (France).

BIOGRAPHIES

JOHN ROACH doesn't consider himself an installation artist, a sound artist, or a sculptor, but prefers to think of himself as a nomad, touching down in whatever place is most hospitable to his ideas. Recent projects have been an installation at the 2B Gallery in Budapest, Hungary; a collaborative performance with objects and video at the Saint Stephen Museum in Szekesfehervar, Hungary; and a web video project called Sweet Music. He continues to work with Willy Whip on their long-standing live networked performance project Simultaneous Translation.

WILLY WHIP is a designer and teacher in hypermedia interactivity. Outside his institutional work he likes to produce mashups that fertilize his own secret garden. This personal research and development leads him on a quest for hybrids: connect this information to that information; grow new contents; release new senses. Recent activity includes projects with the artists Anika Mignotte, Reynald Drouhin, and Du Zhenjun.

Posted by jo at 09:20 AM | Comments (0)

April 06, 2007

WESTERN FRONT NEW MUSIC Presents

wfront.jpg

ALTERNATE VISIONS

WESTERN FRONT NEW MUSIC Presents the West Coast "virtual" premiere of ALTERNATE VISIONS, a new opera by Vancouver composer John Oliver and librettist Genni Gunn :: May 5, 2007 at 5pm :: The Western Front, 303 East 8th Avenue (near Main) :: Admission by donation: Free to Western Front Members :: Reservations / Information: 604 876-9343 / www.front.bc.ca

An alternate vision of love: Staged in Montreal by Chants Libres, and broadcast by a live video stream to the Western Front in Vancouver, this bilingual opera presentation is truly an encounter mediated by cyberspace. Taking the subject matter of computer dating as its focus, Alternate Visions addresses the internet's power to create personas and perfection. In a space where musical styles merge, real and virtual universes collide, robotics and intelligent skin mingle...what happens to human carnal relations?

Two singles develop a relationship over the Internet in a virtual 3D environment; they decide to break the ice of cyberspace and meet in person at a high-tech karaoke bar, which is in fact the set of Alternate Visions, a reality television show. At the last minute, they both get cold feet and ask their respective best friends to take their places. Why take the risk? Why change anything?

Misunderstandings multiply ad infinitum, raising questions around the perception of reality and the culture of identity. Cher internet, peux-tu me dire qui je suis?

Vancouver audiences can attend this live large screen video broadcast of the closing night performance of Alternate Visions on Saturday May 5 at 5pm at the Western Front. Please reserve seats by calling 604 876-9343 - admission is by donation at the door. The opera will be performed in English and French.

Music: John Oliver, on a libretto by Genni Gunn. Soloists: Rinde Eckert (tenor, Richard), Jacinthe Thibault (mezzo-soprano, Valerie), Éthel Guéret (soprano, Susan), Patrick Mallette (baritone, TV host). The three Girls: Jean-François Daigneault (countertenor), Ghislaine Deschambault (soprano) and Claudine Ledoux (mezzo-soprano). Musicians: The Bradyworks ensemble, directed by Cristian Gort.

Production team: Pauline Vaillancourt (stage direction), Jean Décarie (visual design and interactivity), Jean Gervais (lighting), Pascal Dufaux (stage design), Liz Vandal (costumes), Joanna Berzowska (interactive textiles), Jacques-Lee Pelletier (makeup).

This opera is co-produced with the DEII (Laboratory of Immersive and Interactive Environment Development), in partnership with the Hexagram Institute for Research and Creation in Interuniversity Media Arts and Technologies, and in collaboration with Bradyworks, the Usine C and the CIAM.

Since its beginnings in 1990, CHANTS LIBRES has always presented operatic repertoire with a modern flavor and has constantly explored new techniques and approaches in the sphere of vocal art. The company works in close collaboration with creators and researchers of all disciplines. ALTERNATE VISIONS, the company's 12th operatic production, integrates state-of-the-art technologies.

For further information, contact DB Boyko, Music Curator, Western Front New Music 604 876-9343 newmusic[at]front.bc.ca

Posted by jo at 10:57 AM | Comments (0)

April 03, 2007

Furthernoise.org

cd61.jpg

April 2007

Welcome to the April issue of Furthernoise.org, which features a host of new reviews of adventurous cross-genre music, sound and writing. Included in this is our new net label release Appropriate Re-Appropriations, a compilation of compositions by international musicians and sound artists utilizing the Freesound Project archive. It is free to download, with a printed folded sleeve, from the Fn net label menu.

We introduce our new guest reviewer Derek Morton from US drone merchants Mikroknytes as well as podcasts of Furthernoise radio on BCfm.

We are also hosting a live performance by Derek Holzer & Rob Curgenven who will be mixing improvised field recordings and analogue synth explorations at the Blue Lagoon Cafe Bristol on 14th April from 8pm.

"Roggbif Records - Purveyors of Fine Norwegian Noise." (feature) Sten Ove Toft has been of a mainstay of Norwegian Noise for sometime and his Roggbif label is turning out some of the most interesting and aggressive music from the genre. The first time I met him however, it was a somewhat quiet affair. I had received a txt msg from someone saying "You should like this" a time and an address. Review by Mark Francombe.

"Clair / Shahar 'Avner's Arrival'" (review): John Clair and Jed Shahar record a session of 'electro acoustic improvisation'. This is news to the security guard whose venue they are in. An interesting conversation develops. Review by Mark McLaren

"Eavesdropping" (review): Blind people are not casual listeners. Blind since birth, Stephen Kuusisto recounts the surprise that comes when we are actively listening to our surroundings. There is an art to eavesdropping. Review by Mark McLaren

"Helicoids / Psychotic Breaks Mix CD - Alex Young" (review): In June 06 Alex Young created a special continuous mix for the Furthernoise Month Of Sundays live A/V event in London. Combining tracks from two of his releases, psychotic breaks and Helicoids. The result was a thirty-minute trip into frenetic beats, glitch textures, flowing synthesizer ambience and melodic tones. Review by Bill Binkelman

"Sonic Postcards - Sonic Arts Network" (review): Sonic Postcards is a UK wide education project devised and delivered by Sonic Arts Network. It is a unique and innovative project that enables 9-14 year old pupils from across the UK to explore and compare their local sound environments through the exchange of sound postcards with other schools via the internet. Review by Roger Mills

"SuperTexture by Gary Smith" (review): SuperTexture throws away post-processing and effects, leaving just Gary Smith, his guitar and amp to take us on a journey of virtuoso guitar experimentation. The first disc, Smith's solo work, is pure guitar and amp. The second disc features guests exploring each improvisation through their own interpretation. Review by Alex Young

"Terraform EP - Razing Darkness" (review): The journey begins with ominous bass drones and industrial machine noise in the track Gravity Damage. Its evident that we are exploring planetary territories through alien droid transmissions. Headphones are recommended if you prefer to maximize the dizzying array of stereo panning and acoustic mind fuck. Review by Derek Morton

Roger Mills
Editor, Furthernoise

Posted by jo at 12:31 PM | Comments (0)

February 28, 2007

Erik Nelson + Doo Yul Park

06.jpg

Sonorous + Sonanos

Abstract: Sonorous is an experiment in game design that aims to explore the possibilities of using music and sound, rather than traditional story-like narrative, as the primary means of creating context for gameplay and facilitating player engagement. Sonorous takes the form of a shortened real-time strategy (RTS) game in which each unit the player controls takes on the properties of a musical instrument in the player's own temporary ensemble. Through the player's choices and actions, the units dynamically create and alter the musical score. The primary experiential goal of the piece is to evoke a grooving, joyful dancing reaction in the player. The primary research goal of the project is to identify several tenets of interactive sound design as they relate to games, in the hope of unveiling new game mechanics and systems that could be applied to arcade-style and casual video games.

Keywords: Sonorous, Interactive Music, Musical RTS, Game Audio Synchronization, Game Event Synchronization, Music/Narrative, Interactive Sound Design, Grooving RTS

sonanos.jpg

SONANOS: A Musically Structured World

Abstract: SONANOS is an application for interactive visual music. In this PC software, the user simultaneously composes music and constructs objects in a 3D environment. Combining objects generates rhythms, melodies, and phrases, which are determined by the visual properties of the procedurally generated objects. In the virtual environment, users can manipulate both the visual and musical structures. As a result, the user's character evolves and transforms, representing the development of the composition. SONANOS adheres to the aesthetic conventions of the visual music genre, in which abstract graphics correspond to the rhythm. In this project, interactivity amplifies the synesthetic experience by enabling self-expression and deepening the connection between graphics and music.

KEYWORDS: SONANOS, synesthesia, interactive visual music, music video game, art game, procedural/generative/algorithmic graphics/ music/animation, music creation/composition game, musically structured world.

Posted by jo at 04:00 PM | Comments (0)

February 26, 2007

Ophonine Pophorn

0bogotrax.png

Phone as Musical Instrument

Paving the way for more news about Bogota...

Erik Sandelin and Magnus Torstensson from Unsworn have presented their new Ophonine Pophorn software at the Bogotrax festival in Bogotá. The Ophonine is the first in an upcoming series of applications that transform your mobile phone into various musical instruments. With the Ophonine you can record and play sound loops with the press of a button.

First you have to download the Ophonine Pophorn software, which will be available for free in May 2007. After that it works like this: press and hold the button to record a sound loop, using the microphone of the handset. When you let go of the button, the loop is played back repeatedly - through the phone speaker or via an audio cable - until another sound has been recorded. Demo.
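
The hold-to-record, release-to-loop interaction described above can be approximated on a desktop machine as follows. This is only an illustrative sketch, not Unsworn's phone software: the Enter key stands in for the phone button, the recording length is fixed rather than held, and the sounddevice library stands in for the handset microphone and speaker.

```python
# Illustrative desktop sketch of the Ophonine interaction: record a clip,
# then loop it until a new one is recorded (Ctrl+C to quit).
# Library choice and fixed duration are assumptions, not Unsworn's code.
import sounddevice as sd

RATE = 22050
SECONDS = 2.0          # stand-in for "as long as the button is held"

try:
    while True:
        input("Press Enter to record a new loop (Ctrl+C to quit)...")
        clip = sd.rec(int(RATE * SECONDS), samplerate=RATE, channels=1)
        sd.wait()                                   # block until recording finishes
        sd.play(clip, samplerate=RATE, loop=True)   # repeat until replaced
        print("Looping... press Enter to replace it.")
except KeyboardInterrupt:
    sd.stop()
```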

The Ophonine Pophorn is based on Unsworn’s installation, the Four Ophones.

After Bogotrax (which ends today) Unsworn will continue its Colombian Pophorn Tour with events in Medellin, as part of the Pixelazo festival.

Top image from Bogotrax's photos. More on the mobile phone as a musical instrument: musical tones played depending on how far away the hand is from the camera; the HandyDandy; Dialtones; the Sinfonietta ringtone concert. [blogged by Regine on we-make-money-not-art]

Posted by jo at 06:13 PM | Comments (0)

February 19, 2007

Amebeats

AMEBEA~1.png

Music production through haptic interface

Amebeats is a project by Melissa Quintanilha that allows “people to mix sounds by manipulating physical objects instead of twisting knobs or clicking on a music production software“.

The amoeba-shaped board has little boxes in its center that, when moved to the arms, activate different sounds. "My interest in music and design merged to create a haptic interface (based on touch) that allows people to use gesture to mix sounds with their hands. My inspiration for this robotic installation came from going to parties and seeing DJs create the music on their tables, but no one knowing what they do to make the sounds. Generating music using gesture allows for a much more expressive way of creation." [blogged by Nicolas on pasta and vinegar]

Posted by jo at 06:17 PM | Comments (0)

February 05, 2007

4th International Workshop in Mobile Music Technology 2007

music.png

Call for Proposals

MOBILE MUSIC TECHNOLOGY :: FOURTH INTERNATIONAL WORKSHOP :: AMSTERDAM, THE NETHERLANDS, 6-8 MAY 2007 :: Submission deadline: 12th March 2007.

Combining music and mobile technology promises exciting future developments in a rapidly emerging field. Devices such as mobile phones, Walkmans and iPods have already brought music to the ever-changing social and geographic locations of their users and reshaped their experience of the urban landscape. With new properties such as ad hoc networking, Internet connection, and context-awareness, mobile music technology offers countless new artistic, commercial and socio-cultural opportunities for music creation, listening and sharing. How can we push forward the already successful combination of music and mobile technology? What new forms of interaction with music lie ahead, as locative media and music use merge into new forms of everyday experiences?

This series of annual workshops began to explore and establish the emerging field of mobile music technology in 2004. This fourth edition of the Mobile Music Workshop in 2007 offers a unique opportunity to participate in the development of mobile music and hands-on experience of cutting-edge technology.

This year’s workshop is hosted by STEIM and Waag Society in Amsterdam, The Netherlands, and partners with the Futuresonic Festival in Manchester, England, taking place later the same week. The programme of the workshop will consist of keynote presentations from invited speakers, peer-reviewed paper presentations, poster sessions, in-depth discussions about the crucial issues of mobile music technology, demos of state-of-the-art projects, break-out sessions and live events. Registered participants will take part in hands-on sessions conducted by leaders in the field. In addition to traditional presentation sessions, the programme includes events open to a general audience, facilitating the presentation of artworks and technological breakthroughs to a wider public.

The Mobile Music Workshop sets the stage for a collaboration that brings together leading institutions in both experimental electronic music and mobile media. STEIM (the studio for electro-instrumental music) is a centre for electronic music production well known in the performing arts. STEIM promotes the idea that Touch is crucial in communicating with electronic and digital arts technologies, a vision that over the years has given birth to physical, sensor-based musical instruments. Waag Society is a research and development institute in the fields of networked art, education and creative industries. Waag develops platforms for artists to reach society through networked collaboration, media streaming, and locative media.

CALL FOR PARTICIPANTS AND WORKS

We invite practitioners, artists, designers, hackers and researchers from all areas, including music, technology development, new media, sound-art, music distribution, cultural/media studies, locative media and industry to submit work and register to attend. Don't miss this chance to help shape the mobile music landscape of the future! Participants are encouraged to submit their work in mobile music technology to the categories below. The partnership with the Futuresonic Festival allows those coming to Europe to make a single trip to attend both events.

* Papers

We invite submissions of workshop papers presenting new projects, approaches or reflections exploring the topic of mobile music. Potential submissions could include but are not limited to mobile music systems or enabling technologies, interface design, legal issues, user studies, ethnographic fieldwork, social implications, art pieces and other areas relevant to mobile music.
Accepted paper authors will be given a time slot during the workshop for presentation and discussion of their work. Format: up to 8 pages in ACM SIG publications format (shorter papers welcome). For templates, see http://www.acm.org/sigs/pubs/proceed/template.html

* Posters

We also invite the contribution of posters that document work-in-progress projects or ideas in similar areas of mobile music technology as the papers.
Posters will be on display for the duration of the workshop. We will arrange a poster presentation session where attendees will be able to discuss the works with the authors. Format: 2 pages in ACM SIG publications format

* Demonstrations

We also invite submissions of work to the demo category. Besides encouraging paper and poster presenters to bring a demonstration as a complement to their presentation, we encourage submissions of stand-alone demos of mobile music systems or enabling technology. Implementations should be mature enough to be demoed, and may be shown to the general public during open sessions depending on their robustness. Format: 2 pages in ACM SIG publications format.

SUBMISSIONS

Please email your submission as a PDF file in the appropriate format to submissions[at]mobilemusicworkshop.org. In the subject line, state MMW SUBMISSION followed by PAPER, POSTER or DEMO and the name of the main author. Submissions will be peer-reviewed by a committee of international specialists in the fields of mobile music, interactive music, and locative media.

DEADLINES

Submission deadline: 12th March 2007
Notification of acceptance: 2nd April 2007
Registration deadline: 16th April 2007
Final submission deadline: 16th April 2007

REGISTRATION & FEE

This year’s workshop will have both closed sessions for registered participants and sessions open to the general public. The number of participants for the closed sessions of the workshop is limited to 50 places. Accepted submitters are given priority; other participants are accepted on a first-come, first-served basis. Registered participants will have automatic access to all sessions of the workshop. The closed sessions will carry both a regular fee and a reduced student fee, similar to the last edition’s fees.

Registration deadline: 16th April 2007

The open sessions will be advertised in more detail closer to the event. The fee for the open sessions will be event-based. Scheduling and registration fees will be coordinated with Futuresonic to allow participants to easily attend both events.

ORGANISERS

* International Steering Committee

Atau Tanaka (Sony CSL Paris, France)
Frauke Behrendt (University of Sussex, UK)
Lalya Gaye (Viktoria Institute, Sweden)

* Local Organising Committee

Kristina Andersen (STEIM, The Netherlands)
Robert van Heumen (STEIM, The Netherlands)
Ronald Lenz (Waag Society, The Netherlands)

MORE INFORMATION

For more information about the previous and upcoming workshops, the ACM SIG publications format, and travel and accommodation, please consult: http://www.mobilemusicworkshop.org/

Posted by jo at 11:13 AM | Comments (0)

January 24, 2007

Joe Winter

FANTAS~2.png

Fantasie no. 1 for Mobile Pianos

Fantasie no. 1 for Mobile Pianos is a kinetic sound and video installation. Five modified toy pianos circulate the gallery on five elevated wooden tracks. Each piano is equipped with small speakers to play amplified sounds picked up from conductive tape on the tracks. Each piano also contains small motors which intermittently spin to strike the toy pianos' tone bars, adding acoustic elements to the overall soundtrack of the work.

The installation also features five video monitors, which display images of five different real landscapes to complement the abstract landscape of the wooden tracks. As the pianos begin to move around the gallery, these video landscapes become populated by a seemingly infinite stream of mobile pianos.

distributed2.png

Distributed Performance

Distributed Performance takes place in multiple sites throughout a single building. Each site, or 'node,' comprises installed speakers, wire, various amplifiers, microphones, and a computer. All nodes are interlinked, both by audio cable and via ethernet. Each computer features the same display: an abstracted map of the space and the nodes within it, through which audio materials flow. When visual representations of audio flow within the spatial boundaries of a particular node, it is sonified there. The performer's presence at a particular node serves to attract audio material to that location and to slowly organize it into rhythmic structures. Simultaneously, the programming may usurp the performer's control and direct the flow of materials elsewhere. The display may then serve as a map--both to the performer and the audience--to direct the flow of the performance from node to node. More >>

Posted by jo at 03:57 PM | Comments (0)

NASDAQ Vocal Index

OLA.png

Vocal Interpretations of the Market

A vast amount of information in the shape of index graphs is constantly being created at the different stock markets of the world. In the project NASDAQ Vocal Index, the graphs of companies listed on the NASDAQ are transformed into music. Computer software converts the graphs into sheet music, and the scores are presented online and on a projection screen, to be read and sung by a member of a local choir. Every company has its own voice, and together the choir vocally interprets the current situation on the market.
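
The core transformation — a price curve becomes a melodic line one singer can read — could be pictured as a simple quantization of closing prices onto a scale, as in the sketch below. The price list, the scale, and the MIDI output are illustrative assumptions, not the project's actual software.

```python
# Hypothetical sketch of the graph-to-score idea: quantize a company's
# closing prices onto a pentatonic scale and emit them as MIDI notes.
# The price data, scale, and note timing are invented for illustration.
import time
import mido

closes = [34.1, 34.8, 33.9, 35.2, 36.0, 35.4, 37.1, 36.6]   # one company's "voice"
scale = [60, 62, 64, 67, 69, 72, 74, 76, 79]                # C pentatonic (MIDI notes)

lo, hi = min(closes), max(closes)
notes = [scale[round((p - lo) / (hi - lo) * (len(scale) - 1))] for p in closes]

with mido.open_output() as port:              # default MIDI output port
    for note in notes:
        port.send(mido.Message('note_on', note=note, velocity=80))
        time.sleep(0.5)                       # one beat per trading interval
        port.send(mido.Message('note_off', note=note))
```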

The project has been performed, among other places, at the Tirana Biennale, in Yamaguchi, Japan, and at Färgfabriken, Stockholm.

Posted by jo at 10:04 AM | Comments (0)

RPM's Remixed

funksoup.png

A Virtual Space "Sings"

RPM's Remixed is a telematic, transdisciplinary performance based on remixing Alvin Lucier's RPM's score – integrating dance, video and sound improvisation between artists in New York, Tampa, and San Jose.

Alvin Lucier, a well-known composer of music and sound installations that explore natural phenomena and resonance, is renowned for making spaces "sing." This piece explores the possibilities of using one of his scores to make a virtual space "sing" by using improvisational techniques as well as the natural feedback and delay created by streaming.

Themes exploring isolation, intersection, and madness reverberate through images, body and sound. Dancers and sound artists in San Jose, Tampa, and New York collaborate while realtime processing of the video images is driven partially by the performers’ movements. Motion analysis is sent over the network to trigger aspects of video manipulation, further blurring the lines of authorship and contributing to the impromptu chemistry.

Lucier’s original “RPM’s” score was a tongue-in-cheek take on creating sound based on depressing and releasing the accelerator of an Aston Martin engine. The score itself is a series of nonsensical curvatures and dashes – nonsensical, that is, until a personal interpretation of each written gesture is applied.

Taking this a step further, the cast of RPM’s Remixed deconstructs the score, into dance gestures, violin strokes, guitar riffs, and sound and video mashing. The challenge is not only to collaborate within one’s own medium but to improvise successfully between mediums and within a virtual space. Can the magic of improvisation reach across the ether? [via Rhizome]

Posted by jo at 09:33 AM | Comments (0)

January 23, 2007

Tangible Musical Interfaces

MIT_patten_audiopad.jpg

Music Tables

This overview of tangible musical interfaces focuses on table-top controllers, such as music tables and audio building blocks, mostly chosen for their relation to the reacTable* project. It also includes a few additional projects which we think are somewhat closer to the concept of tangible interfaces than to simple musical controllers. This page is maintained by Martin Kaltenbrunner. Please report any suggestions, updates or changes to mkalten_at_iua_dot_upf_dot_edu. If you are looking for more non-musical table interfaces, you could have a look at this collection at pasta & vinegar.

1. Music Tables with Tangibles: audiopad

download video
• author(s): James Patten, Ben Recht, Hiroshi Ishii
• affiliation: MIT Medialab, Tangible Media Group
• publication: Audiopad: A Tag-based Interface for Musical Performance [PDF]
• website: http://www.jamespatten.com/audiopad

• description: Audiopad is a composition and performance instrument for electronic music which tracks the positions of objects on a tabletop surface and converts their motion into music. One can pull sounds from a giant set of samples, juxtapose archived recordings against warm synthetic melodies, cut between drum loops to create new beats, and apply digital processing all at the same time on the same table. Audiopad not only allows for spontaneous reinterpretation of musical compositions, but also creates a visual and tactile dialogue between itself, the performer, and the audience.

Posted by jo at 03:39 PM | Comments (0)

January 17, 2007

The Heart Chamber Orchestra

0AORCH~1.png

Music "From the Heart"

The Heart Chamber Orchestra is a one-hour performance that literally creates music "from the heart". The orchestra consists of 12 classical musicians of the Trondheim Sinfonietta and the artist duo TERMINALBEACH (made up of Pure and Erich Berger). Using their heartbeats, the musicians control a computer composition and visualization environment. The musical score is generated in real time by the heartbeats of the musicians. They read and play this score from a computer screen.

The musicians and artists are equipped with ECG sensors. A computer monitors and analyzes the state of the 12 hearts. Software analyzes the data and, via different algorithms, generates in real time the musical score for the musicians, the electronic sounds and the computer graphics visualization.
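
One way to picture the heartbeat-to-score step is the toy mapping below, in which each player's inter-beat interval sets a tempo and picks a pitch. It is purely illustrative, under stated assumptions (invented beat times, an arbitrary tempo-to-pitch rule); the orchestra's actual algorithms are not described in this post.

```python
# Illustrative mapping from heartbeats to notated material, in the spirit
# of the description above (not the orchestra's actual algorithms):
# each musician's mean inter-beat interval sets a tempo and a pitch.
beat_times = {                 # seconds at which R-peaks were detected (assumed data)
    "violin 1": [0.0, 0.82, 1.65, 2.46],
    "cello":    [0.0, 0.95, 1.92, 2.88],
}
scale = ["C4", "D4", "E4", "G4", "A4", "C5"]

for player, times in beat_times.items():
    intervals = [b - a for a, b in zip(times, times[1:])]
    mean_ibi = sum(intervals) / len(intervals)
    bpm = 60.0 / mean_ibi                          # heart rate drives the tempo
    # Faster hearts reach higher into the scale (arbitrary rule for illustration).
    idx = max(0, min(len(scale) - 1, int((bpm - 50) / 10)))
    print(f"{player}: {bpm:.0f} bpm -> quarter note = {bpm:.0f}, pitch {scale[idx]}")
```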

While the musicians are playing, their heartbeats influence and change the composition and vice versa. The musicians and the electronic composition are linked via the hearts in a feedback structure. The music is the expression of this process and of an organism forming itself from the circular interplay of the individual musicians and the machine.

Videos.

Related: brain-orchestra; Sensors_Sonics_Sights. [blogged by Regine on we-make-money-not-art]

Posted by jo at 07:12 AM | Comments (0)

January 05, 2007

Fantastic Piano

fantpiano.png

Orchestrating an Audio-Visual Dance

Fantastic Piano is an interactive installation created by So-young Park, Laurel Boylen, Shin-Yi Huang and Cho Rong Hwang from ITP (The Interactive Telecommunications Program) at NYU. The installation was presented at last year’s (2006) ITP Winter Show on December 17 and 18.

Fantastic Piano is a musical instrument that creates a mystical and beautiful atmosphere when played by a user. The installation consists of six plexiglass cylinders filled with water and glitter. Six water pumps control the motion of the water and the glitter. The user interacts via six infra-red sensors placed in front of the cylinders. When a user approaches the Fantastic Piano and moves his or her hands through the air in front of the translucent tubes filled with water and glitter, the infra-red sensors sense this movement and trigger the motion of the water and the playing of various sounds. This way, the user orchestrates not only a musical composition but also a visual dance, as the water and glitter respond to his or her movements.

Check out the video clips one and two.

More projects from the ITP Winter Show 2006 will be posted in the following days. [posted by Lene Mailund on Digital Experience]

Posted by jo at 06:45 PM | Comments (0)

December 22, 2006

Rock Stars (And Proteins, Too)

STELLA~1.png

How two groups of scientists coax music from nature

"From Led Zeppelin to Wolf Parade, rock music owes a debt to science—and to the scientists whose fascination with sound art spurred them to create new noises. A Russian physicist named Leon Theremin developed the world's first electronic musical instrument: a box with antennae (appropriately called a "theremin") that used electric circuits to create a range of otherworldly sounds. Four decades later, in 1964, an engineering physics Ph.D. named Robert Moog invented the synthesizer that bears his name. Now, a new generation of scientists and musicians continue to push the frontiers of musical possibility.

Since 2003, Hungarian astrophysicist Zoltán Kolláth and composer Jenő Keuler have been working on what they call the "Stellar Music Project," using pulse patterns generated by stars to compose musical pieces..." Continue reading Rock Stars (And Proteins, Too) by Lydia Fong, Seed.

Posted by jo at 07:07 PM | Comments (0)

December 08, 2006

ToneLadder

ladder2.jpg

Sounding Balance

ToneLadder is a household ladder extended with a melodic function and made into a musical instrument. Stepping on a rung of the ladder creates a sound or tone that differs from rung to rung. This way, the ToneLadder can be used as a musical performance device. Most electronic music is performed in quite a similar way, using keyboards and laptops. ToneLadder offers an alternative way of performing music by letting the artist trigger sound loops by balancing on the ladder.

The technical implementation consists of special sensors attached close to the top of each rung of the ladder. If one of these sensors is triggered by a foot or hand, an analog signal is sent to a processing unit, which transforms the incoming analog signal into a MIDI signal. This signal is then either sent to a computer or processed directly by a hardware music sampler. The signal triggers a specific sound loop or tone for each rung of the ladder. By hitting the rung softly or hard, the user can also influence the volume and the modulation of the sound. ToneLadder can also be used to trigger live video streams.
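
The per-rung signal path just described (sensor reading in, MIDI note with hit-strength velocity out) could look roughly like the sketch below. The serial port name, packet format, threshold, and note assignments are all assumptions for illustration; this is not ToneLadder's actual firmware or host software.

```python
# Rough sketch of the ToneLadder signal path: per-rung analog readings
# arrive over a serial link and become MIDI note-on messages whose
# velocity reflects how hard the rung was hit. Port name, packet format,
# threshold, and note map are assumptions.
import serial
import mido

RUNG_NOTES = [36, 38, 40, 41, 43, 45, 47]    # one MIDI note per rung (assumed)
THRESHOLD = 50                               # ignore readings below this level

with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as link, \
        mido.open_output() as midi:
    while True:
        line = link.readline().decode("ascii", errors="ignore").strip()
        if not line:
            continue
        try:
            rung, raw = (int(x) for x in line.split(","))   # packet: "rung,value"
        except ValueError:
            continue
        if raw < THRESHOLD or not 0 <= rung < len(RUNG_NOTES):
            continue
        velocity = min(127, raw // 8)        # harder hit -> louder loop
        midi.send(mido.Message('note_on', note=RUNG_NOTES[rung], velocity=velocity))
```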

ToneLadder is available in a basic version, a dynamic ladder, and a wireless wall-ladder. Check out the demo video and a video of the wireless wall-ladder. ToneLadder is exhibited at this year's Nordic Exceptional Trendshop in Copenhagen. [posted by Lene Mailund on Digital Experience]

Posted by jo at 08:40 AM | Comments (0)

November 22, 2006

Cobi van Tonder's

skatesonicv.png

Skatesonic

Skatesonic uses the motions and sounds of skateboards, exploring their inherent ambient rhythm to create music. In a way, each move translates to musical parameters, and the rider ends up skating through a landscape of music (which s/he influences over time). Related post >>

Posted by jo at 03:54 PM | Comments (0)

November 14, 2006

Volume Over Lumen

lumvvvv.png

Rhythm Communicator

Volume Over Lumen is a rhythm communicator for several people. Each participant wears a silicone collar which sends a rhythmically arranged sound. A particular sound is generated for each participant according to an analysis of their voice. LEDs in the collar create a halo of pulsating light that radiates around the wearer. If another participant comes within the range of the collar, he or she can hear the sound generated by the collar worn by the first person. But if he or she comes within the range of several collars, then it’s a mix of the various sound compositions that will be heard.

By their own movement in the area, participants can modulate their own sound experience, as well as the acoustic experience of the other collar wearers. The electronics are visibly cast into the collar: resistors, transistors, diodes, conductive strips and other elements have a functional as well as an aesthetic role. A work by Martin Bellardi and Anne-Christin Delakowitz. [blogged by Regine on TWENTY1F]

Posted by jo at 09:13 AM | Comments (0)

November 10, 2006

Toriton Plus:

watermusic.png

Water Surface as Music Controller II

A slightly more complex version of using a water surface as a music controller featuring five lasers instead of just the one. Please note that this is a prototype version, and as such does not represent the variety of output for the finished thing.

Posted by jo at 04:38 PM | Comments (0)

BrainWaves

mea.png

Musical Performance by Cortical Cells

BrainWaves is a musical performance by cultured cortical cells interfacing with multielectrode arrays. Eight electrodes recorded neural patterns that were filtered to eight speakers after being sonified by robotic and human interpretation. Sound patterns followed neural spikes and waveforms, and also extended to video, with live visualizations of the music and neural patterns in front of a mesmerized audience. See a two minute video here (still image below). Teams from two research labs designed and engineered the project; read more from collaborator Gil Weinberg.
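
A minimal way to picture "sound patterns followed neural spikes" is to route each electrode's spike events to its own pitch (and, in the installation, its own speaker). The sketch below is only an illustration: the spike times, the channel-to-pitch map, and the MIDI output are invented, and the project's actual sonification was far richer.

```python
# Minimal sonification sketch in the spirit of BrainWaves: each of eight
# electrodes gets its own pitch, and every detected spike becomes a short
# MIDI note. Spike times and the channel-to-pitch map are invented here.
import time
import mido

spikes = [                      # (time in seconds, electrode index), assumed data
    (0.05, 0), (0.12, 3), (0.13, 3), (0.40, 7), (0.55, 1), (0.90, 3),
]
channel_pitch = [48 + 3 * i for i in range(8)]      # electrode -> MIDI pitch

with mido.open_output() as port:
    start = time.monotonic()
    for t, electrode in spikes:
        # Wait until this spike's timestamp, then sound a short note.
        time.sleep(max(0.0, t - (time.monotonic() - start)))
        note = channel_pitch[electrode]
        port.send(mido.Message('note_on', note=note, velocity=100))
        time.sleep(0.03)
        port.send(mido.Message('note_off', note=note))
```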

BRAINW~1.png

The Potter Group of the Laboratory for Neuro-Engineering at the Georgia Institute of Technology conducts research with cultured neuro-electrical interfaces, using recording and stimulation techniques. BrainWaves is one creative expression of this work; another is the MEART Animat, or hybrot, which creates paintings with robotic arms. The Animat used rat neurons, BrainWaves uses mouse neurons, and Fish and Chips (a predecessor of BrainWaves) used simple fish neurons. Organic and digital meshed.

BrainWaves invokes the aesthetics of sound patterns with neural patterns. MEART, "the semi-living artist," is abstract in its output. What do these expressions mean? Their beauty lies in science, in the synchronized creative outlets of live, improvised performances, with digital and audience interfaces generating compositions in reaction to neural activity, which are then put online. Two of the works, including BrainWaves, were documented in Interactive Sonification of Neural Activity, Weinberg & Thatcher, from Proceedings of the 2006 International Conference on New Interfaces for Musical Expression. (PDF)

Potter recently published on plasticity in neuronal cultures, and earlier in the year co-authored An extremely rich repertoire of bursting patterns during the development of cortical cultures. Wagenaar, Pine, Potter; BMC Neurosci. 2006 Feb 7;7:11 (free full text).

The paper includes an open invitation. "Indeed, the range of behaviors of these cultures is so rich that this paper can only begin to describe the diverse activity patterns present in these recordings. Therefore, we invite others to join us in the study of activity patterns of networks of cortical cells in vitro. To this end, we have made the entire dataset used for this paper available on the web [email Potter for access]. Researchers may download our recordings of spike waveforms."

Imagine what sci-art expressions could develop in the research and infoviz from that. [posted by Sandra on Neurofuture] [via]

Posted by jo at 04:09 PM | Comments (0)

Onliness v1.0.1

20061110.gif

Open Source Music

When one speaks of 'open source culture,' they are typically referring to the social conditions created in an atmosphere of sharing. Open source software is not only freely available, its source code is also revealed to encourage collective authorship, improvement, and personalization. The increasing popularity of such publishing contexts has led to a broader movement of activists, coders, lawyers, artists, and others questioning contemporary copyright law, and challenging the 'code' that prohibits collaboration. American sound artist Demon Doctor carries these concerns over into his work, equating music with language and sounds with letters, ultimately asking 'who owns the alphabet.' For his new album, 'Onliness v1.0.1,' the artist sampled public archives of ethnographic recordings, found phonography, and film scores, and reprocessed them using analog and digital synthesizers to create twenty-one new brick-hop and trancehall tracks. 'Onliness' premieres at Boston's Samson Projects on Saturday, November 11th, where Demon Doctor will collaborate with DJ Spase1, in a series of live interpretive mashups, carrying the open source ideal into a performative realm. Needless to say, the entire album is freely downloadable, so readers can remix the tracks for themselves. - Marisa Olson, Rhizome News.

Posted by jo at 10:51 AM | Comments (0)

November 08, 2006

GypsyMIDI

5091_2~1.png

Body as Musical Instrument

The Gypsy MIDI controller turns the human body into a musical instrument, gizmag, January 26, 2006.

Dance and music go together. Intuitively, we know they have common elements, and while we cannot even begin to understand what they are or how they so perfectly complement one another, it is clear that they are both an expression of something deep and fundamental within all human beings. Both express things that words cannot – beyond intellect, they are perhaps two of the fundamental building blocks of human expression, common to the souls of all people. Which is why, when we saw this machine that links the two, we knew there was something special brewing. The GypsyMIDI is a unique instrument for motion-capture MIDI control – a machine that enables a human being to become a musical instrument - well, a musical instrument controller to be exact, or a bunch of other things depending on your imagination.

Most importantly, the entire package is commercially available with extensive customisation features so that you can decide what each movement triggers – a colour, a sound, or perhaps something else again – anything that can be controlled by a digital interface. The set-up and operation are simple, intuitive and quick, and the possibilities for performance art and musical applications are … landmark. One arm costs UKP480 (US$855), the whole MIDI suit costs UKP940 (US$1675), and the whole shebang (MIDI Suit, Wireless Interface, Tripod Stand, interface software, Manuals & Videos CD) goes for UKP1240 (US$2210) … that’s the total price for beginning work in a new dimension. Like we said … landmark.

The suit is modeled on the human skeletal form using rotational sensors at the joints. The GypsyMIDI simply plugs into a MIDI interface and arm movements are converted into a real-time stream of MIDI data. The mapping interface eXo-software allows the user to define how the movements are translated into MIDI control, including the ability to trigger notes, generate continuous control commands or even play scales.

Software included with the suit lets the user control any MIDI-enabled program including Cubase, Live, Logic Audio, ProTools, MotionBuilder, Reason, Traktor DJ Studio and any VST instrument or effect. Real-time control of sliders, cross-faders, and buttons allows many parameters such as volume, filter cut-off & resonance to be manipulated instantly.
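
The kind of mapping described above — a stream of joint rotations translated into notes or continuous controllers — can be pictured with the toy sketch below. The joint names, angle values, and CC assignments are invented for illustration; this is not the eXo-software itself.

```python
# Toy version of the kind of mapping the suit's software performs: a stream
# of joint angles (degrees) becomes MIDI continuous-controller data.
# The angle source, joint names, and CC numbers are assumptions.
import mido

JOINT_CC = {"shoulder": 1, "elbow": 74, "wrist": 71}     # assumed CC assignments

def angle_to_cc(angle_deg: float, lo: float = 0.0, hi: float = 180.0) -> int:
    """Scale a joint angle onto the 0-127 MIDI controller range."""
    angle_deg = max(lo, min(hi, angle_deg))
    return round((angle_deg - lo) / (hi - lo) * 127)

frames = [                                    # a few invented motion-capture frames
    {"shoulder": 10, "elbow": 95, "wrist": 40},
    {"shoulder": 35, "elbow": 120, "wrist": 60},
]

with mido.open_output() as port:
    for frame in frames:
        for joint, angle in frame.items():
            port.send(mido.Message('control_change',
                                   control=JOINT_CC[joint],
                                   value=angle_to_cc(angle)))
```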

The concept for the MIDI suit started to evolve in San Francisco's late-’90s dance scene. Seeing how the body expressed music through dance led us to experiment with the existing Gypsy Mocap system designed for 3D animation. The company wanted to explore the possibility of orchestrating and composing music for real-time performance through body movements and dance. This was the beginning of the discovery of a diverse multimedia instrument that promises to add new dimensions to live performance for visual artists, DJs and musicians for years to come.

Now artists can have the advantage of a body instrument that allows music authoring in real time performance. This Mocap Midi controller suit translates body movements into sounds, loops, lights and visuals, completely merging performers and their art and enabling a wide range of musical and visual applications. [via new media : ryan peter andre tobin]

Posted by jo at 12:18 PM | Comments (0)

November 07, 2006

New Musical Experiences

reactable.png

Reactable

Fantastic next-gen music-making interface - somewhat similar to Jeff Han's Multi-touch interface, which imploded all the interaction design mailing lists a few months back - and with some genuine possibilities for the kind of tactile, embodied instrument interfaces I suggested at the end of New Musical Experiences. [posted by Dan Hill City of Sound]

Posted by jo at 12:30 PM | Comments (0)

October 31, 2006

Synthecology

CWALL_04.png

A Tele-Immersive Collaboration

Synthecology combines the possibilities of tele-immersive collaboration with a new architecture for virtual reality sound immersion to create an environment where musicians from all locations can interactively perform and create sonic environments.

Compose, sculpt, and improvise with other musicians and artists in an ephemeral garden of sonic lifeforms. Synthecology invites visitors to this digitally fertile space to create a musical sculpture of synthesized tones and sound samples provided by web inhabitants. Upon entering the garden, each participant can pluck contributed sounds from the air and plant them, wander the garden playing their own improvisation, or collaborate with other participants to create/author a new composition.

As each new 'seed' is planted and grown, sculpted and played, this garden becomes both a musical instrument and a composition to be shared with the rest of the network. Every inhabitant creates, not just as an individual composer shaping their own themes, but as a collaborator in real time who is able to improvise new soundscapes in the garden by cooperating with other avatars from diverse geographical locations.

Virtual participants are fully immersed in the garden landscape through the use of passive stereoscopic technology and spatialized audio to create a networked tele-immersive environment where all inhabitants can collaborate, socialize and play. Guests from across the globe are similarly embodied as avatars throughout this environment, each experiencing the audio and visual presence of the others.

MAIN-R~1.png

Participants from the WWW use a browser interface to contribute sound elements to the garden environment for use as compositional items. All the while, this real-time composition is streamed through web broadcast of the virtual environment to illustrate the audio-visual transformation of the garden. Broadcast throughout the entirety of the festival, Synthecology will celebrate the possibilities of collaboration, improvisation, and distributed authorship that exist on the horizon of an increasingly interconnected world.

As current advances in networking become commonplace, the creation of collaborative environments connecting remote individuals will become less involved. By augmenting the possibilities for users to share sensory presence through tele-immersive interfaces, Applied Interactives intends to combine the possibilities of real-time collaboration and socialization with the dynamics of digital creation and manipulation. Synthecology is a speculative glance at how the technology of today may be utilized to create new autonomous zones for sampling & re-mixing culture.

COLLABORATORS

Synthecology is being created as a collaboration of students and faculty from the Electronic Visualization Laboratory at the University of Illinois at Chicago, The School of the Art Institute of Chicago, and Columbia College Chicago, and art(n) through the Applied Interactives organization.

ABOUT APPLIED INTERACTIVES

The purpose of Applied Interactives, NFP is to educate the art and science community about the medium of Virtual Reality as an interactive, computer-generated, immersive computer graphics environment. Applied Interactives, NFP plans to advance the medium through research and experimentation as well as provide a bridge to bring the technology out of institutional labs and into more publicly accessible arenas. Applied Interactives, NFP intends to propagate the medium by providing support and direct access to the resources necessary for artists and scientists to exhibit and develop works in the medium.

Posted by jo at 03:44 PM | Comments (0)

October 18, 2006

eyeSpace

joyce_01.png

Merce + Rouse + iPod + Joyce

Last week, audience members enjoyed a unique experience in maverick choreographer Merce Cunningham’s newest piece, eyeSpace. Using iPod Shuffles, each member of the audience heard different pieces of composer Mikel Rouse’s score, International Cloud Atlas. The audience was provided with iPod Shuffles set to play the score’s tracks in a random order, giving each viewer a distinct experience.

Audience members were requested to bring their own iPods, loaded with the score they downloaded from this page, to play during the performance of eyeSpace. For those who didn't have their own iPods, iPod Shuffles were available for use on loan (at no cost) at The Joyce Theater.

With their ticket purchase, audience members were also entitled to receive a free download of select tracks from Mikel Rouse’s score. [via Great Dance]

Posted by jo at 06:56 PM | Comments (0)

October 17, 2006

undersound + unspoken_series

0UNDER~1.png

Interactions Under the City

undersound is an interface that allows you to listen to, distribute and affect the flow of music on your mobile phone while you're travelling in the underground.

The project imagines that you will be able to add music to the system at upload points in the ticket halls, and download tracks on the platforms. Because of the architectural configuration of the stations undersound users would have to congregate at certain locations for the purpose of interacting with the system.

Each track in the system is tagged with its place of origin (the station where it was uploaded) and this information is visible as the track is being played. This may trigger musings around your personal relationship to that place.

While in the carriages of the tube, you can browse the undersound music of other people in range (an idea inspired by tunA). You can see how long a track has been in the system, the number of times it has been played, the number of people who have played it, etc. You can browse through others' tracks anonymously, but if you decide to download a song from someone else, an alert will be triggered on their phone letting them know that you are grabbing one of their tracks.
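
The per-track metadata just described — origin station, time in the system, play counts, who has grabbed it — amounts to a small record attached to every file. A sketch of such a structure is below; the field names and values are invented for illustration and are not the project's actual data model.

```python
# Sketch of the kind of per-track record undersound would need: where the
# track entered the system, how long it has circulated, and how it has
# travelled between listeners. Field names and values are invented.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class UndersoundTrack:
    title: str
    origin_station: str                      # station where the track was uploaded
    uploaded_at: datetime
    play_count: int = 0
    listeners: set[str] = field(default_factory=set)

    def played_by(self, listener_id: str) -> None:
        """Record a playback and note who played it, as described above."""
        self.play_count += 1
        self.listeners.add(listener_id)

track = UndersoundTrack("untitled loop", "Angel", datetime(2006, 10, 1, 9, 30))
track.played_by("phone-7f3a")
print(track.origin_station, track.play_count, len(track.listeners))
```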

A project by Arianna Bassoli, Johanna Brewer and Karen Martin.

Via coin-operated.

255717~1.png

Another nice art project for the metro:

In unspoken_series, performers don a custom-made construction vest embedded with red and green LEDs that can display up to 24 characters.

The artist programmes messages onto the front and back of the vest, revealing facts, thoughts or ideas that the wearer would otherwise not say aloud in public. For example, the vests worn in the metro could display messages at once humorous and confrontational, such as: “Took my seat...I am not happy!”, or in an art gallery, "This is crap! ...are you happy?”

People who happen to be in the metro become the audience. The vests become the starting point of conversations about issues around the specific city the wearer is traveling to or from, about the main target audience he or she is exposed to, and general concerns in public space.

Video.

A work by Hoyun Son whose website also features a very funny parody of Speech Recognition Technology (video).

Currently on view at Sartorial Flux, a show curated by Valerie LaMontagne, at the A + D Gallery in Chicago until October 21st.

Deeper under the ground: For sale: Britain’s underground city; Tokyo's underground farm; Frame-by-frame underground ads; Cast Off stages knit-ins on the London Underground, occupying a carriage and knitting around the Circle line; Locker art show; parasite and Chiho Aoshima in the London underground. [blogged by Regine on we-make-money-not-art]

Posted by jo at 11:56 AM | Comments (0)

October 16, 2006

Oppera Internettikka - Protection et Sécurité

fear.png

FearingS

What is fear? Why should you be afraid? What, who is to fear? Where does your fear come from? Participate in creating a collective voice about "fear". Help reveal its actual tendencies.

FearingS is a part of the project Oppera Internettikka - Protection et Sécurité by Annie Abrahams & Igor Stromajer.

Oppera Internettikka - Protection et Sécurité explores the poetics of a contemporary sound form -- live opera as a sound event for the audience in the form of a live internet audio broadcast. In that way it combines the notion of world wide web communication protocols with a classical artspace -- an opera house. Opera is a very strictly coded form of art with a lot of passion, and the internet is a lonely place of solitude and intimate communication which is becoming more and more fragile, dangerous and suspicious.

The theme of the project is security and protection, which refers both to private and personal safety issues in the present post-WTC world of high and intense but questionable security measures and to the internet as a global and therefore extremely vulnerable and unstable communication tool.

Three artists with specific roles (artist Annie Abrahams as mutter courage, intimate mobile communication artist Igor Stromajer as bigbrother, and a professional opera singer as a French secret service agent) perform a complex structure composed of sounds, voices, noises, internet audio files and sound-manipulating machines, using these sounds to perform/sing/reproduce a story for the audience present in the opera house and for those following the audio part of the project live on the internet worldwide.

The libretto for the opera is composed from three sources -- mutter courage uses texts written and proposed in advance by online visitors, bigbrother's only lyrics come from a web search engine which he manipulates in realtime, and the secret service agent sings the HTML source code (Hypertext Markup Language -- the authoring language used to create world wide web pages) and JavaScript (a scripting language whose code can be embedded into the HTML of a web page to add functionality) from specifically selected French secret service web sites, openly available on the internet.

It is a new media project filled with abstract emotions, created in a valuable tradition of Bauhaus musical theater mixed with Dada, sound poetry and contemporary digital means of expression. Therefore the location of L'Opéra National de Montpellier has been selected to emphasize our efforts of introducing new artistic forms and production processes to the audience.

Project supported by The Ministry of Culture of the Republic of Slovenia, The Ministry of Culture of the Republic of France, The Municipality of Montpellier, The Council of the Languedoc-Roussillon region and Rhizome.org 2006-2007 Commissions. Co-produced by Intima Virtual Base (Slovenia), Panoplie.org & bram.org (France)

Posted by jo at 06:15 PM | Comments (0)

October 01, 2006

Turbulence Commission:

graphtheory.gif

Graph Theory

Turbulence Commission: Graph Theory by Jason Freeman, with Patricia Reed and Maja Cerar :: [needs Macromedia Flash Player plugin; Internet Explorer 5+, Mozilla Firefox 1.5.0+, or Safari 1.0+]

"Graph Theory" seeks to connect composition, listening, and concert performance by coupling an acoustic work for solo violin or solo cello to an interactive web site. On the web site, users navigate among sixty-one short, looping musical fragments to create their own unique path through the composition. The navigation choices which users make affect future concert performances of the work. Before each performance, the soloist prints out a new copy of the score from the web site. That score presents her with a fixed path through the piece; the order of the fragments is influenced by the decisions that recent web site visitors have made.

"Graph Theory" is a 2006 commission of New Radio and Performing Arts, Inc. (aka Ether-Ore) for its Turbulence web site. It was made possible with funding from The Greenwall Foundation.

BIOGRAPHIES

JASON FREEMAN'S works break down conventional barriers between composers, performers, and listeners, using new technology and unconventional notation to turn audiences and musicians into compositional collaborators. His music has been performed by the American Composers Orchestra, Speculum Musicae, the So Percussion Group, the Nieuw Ensemble, Le Nouvel Ensemble Moderne, and Evan Ziporyn; and his interactive installations and software art have been exhibited at the Lincoln Center Festival, the Boston CyberArts Festival, and the Transmediale Festival and featured in the New York Times and on National Public Radio. N.A.G. (Network Auralization for Gnutella) (2003), a commission from Turbulence.org, was described by Billboard as "an example of the web's mind-expanding possibilities." Freeman received his B.A. in music from Yale University and his M.A. and D.M.A. in composition from Columbia University. He is currently an assistant professor of music at the Georgia Institute of Technology in Atlanta.

PATRICIA REED (1977, Ottawa, Canada) completed her studies in Studio Arts at Concordia University, Montréal (1999). In 2001-02 she attended the residency program of CCA Kitakyushu, Japan; in 2003 she relocated to Europe, through an artists' residency in Prague and Akademie Schloss Solitude in Stuttgart (2003-4, 05). While maintaining an active artistic practice, she also works as a designer with focus on developing interfaces and means of visualizing scientific research, thereby making accessible complex information to a broader audience. Recent design projects include the development of information cartography/exhibition architecture for The Gallery of Research, Vienna (2005) and a sociological web 'diorama', Paris:Ville Invisible, with Bruno Latour (2004) which will be featured in an exhibition at Centre Pompidou, Paris (2007). Graph Theory is Reed's first collaborative work within the field of musical composition. She currently lives and works in Berlin, Germany.

MAJA CERAR is a concert violinist who studied with Aida Stucki-Piraccini in Zurich and with Dorothy DeLay in New York and is currently completing her Ph.D. in Historical Musicology at Columbia University. She frequently works with composers, has premiered numerous works written for her, and has been coached by Beat Furrer and György Kurtàg. Since her debut in the Zurich Tonhalle in 1991 she has played as a soloist with orchestras in Europe, given recitals with distinguished artists on international tours (Paris, Rome, Washington, Chicago, New York) as well as at festivals in Europe (including the Davos "Young Artists in Concert," the Gidon Kremer Lockenhaus Festival, the ISCM World Music Days and the ICMC Barcelona), America (Aspen, Vermont, Santa Fe, San Diego), and Asia.

Posted by jo at 09:01 AM | Comments (0)

September 29, 2006

Ping Pong Placard

placard.png

Paris-Taipeh-Kyoto

September 29 - October 1, 2006 :: Paris, France; Taipeh, Taiwan and Kyoto, Japan

The Paris-Taipeh-Kyoto Ping Pong Placard will take place this weekend in an apartment in Paris and in other locations in Taipeh and Kyoto. The Placard consists of a series of one-hour performances and live acts streamed over the internet and performed between Paris, Taipeh and Kyoto. There will also be some events and actions, requiring or not requiring physical displacement.

You can find the updated information here. You are warmly welcome at this event, which I hope you will enjoy. If you cannot make it to Paris, Taipeh, Kyoto or Bruxelles, you may either check out the other "Listening Placards" (see the about page linked below, or the FAQ) or create a Listening Room yourself. You can also join us on IRC through the placard website.

Thanks a lot to all the people who started the whole thing, helped throughout, participated, shared the fun, played or are going to play, maintained the positive energy and the creative spirit, forwarded or are going to forward this invitation to whoever might be interested, they know who they are.

See you there.
Philippe
http://xlrmx.org

Placard 2006: Placard is an international electronic music headphone festival featuring many artists from different places, experimenting with sound and music. You can check out more info at the about page. The Paris-Taipeh-Kyoto Ping Pong placard will be held from September 29, 2006 to Oct 1 2006 in Paris, France; Taipeh, Taiwan and Kyoto, Japan. It's free and you can apply to play there.

Posted by jo at 10:57 AM | Comments (0)

September 25, 2006

Antoine Schmitt & Jean-Jacques Birgé

nabazmob.png

Nabaz'Mob: an opera

Atari is happy to present Nabaz'Mob: an opera for 100 communicating rabbits by Antoine Schmitt & Jean-Jacques Birgé featuring Nabaztag by violet.

Installation http://nabazmob.free.fr/ from September 27th to October 1st 2006 during Wired Magazine's nextfest at Atari Showroom Javits Center, HALL 3B, Chelsea, NY.

100 Nabaztag rabbits meet at the Javits Center to play together an opera specially composed by Antoine Schmitt and Jean-Jacques Birgé after an original idea by Guylaine Monnier. Inviting John Cage, Steve Reich, Conlon Nancarrow and György Ligeti, this musical and choreographic score in three movements, transmitted via wi-fi, plays on the tension between ensemble communion and individual behavior to create a strong and involving showpiece.

Posted by jo at 05:30 PM | Comments (0)

No2Pho/notovo research project

nopho.png

Happy New Ears (festival for New Music), Kortrijk (be)

notovo at Happy New Ears (festival for New Music), Kortrijk (be) :: The No2Pho/notovo research project investigates the behaviour of language in its many appearances: textual, sonic and visual, as well as gestural or body language. How do these disparate elements relate to each other, and how do they organize within a system which includes human and computer as sender and receiver [and vice versa]?

The notovo installation [article (PDF)]: as a generative sound installation, NoToVo (noise to voice) plays with a connected set of elements. It is composed of dissonant synthetic voices, changing in real time from speech to sound. A physical network and a virtual network. People and voices. Wireless headphones and spatialized sound sources. People walk their trajectory through the space. Their location and orientation are tracked. These data modulate the behaviour of synthetic voices generated in real time. As such, a conversation can be physically crossed and experienced according to the visitor's own position. Voices are layered and mixed by movements and trajectories through the space: the visitor becomes the performer.

No2Pho/notovo is developed by so-on (a collaboration between sukandar kartadinata, johannes taelman, edo paulus, billy bultheel and annemie maes). With the support of the VAF (Vlaams Audiovisueel Fonds).

http://so-on.be
http://okno.be/?id=954
http://happynewears.be

location: Happy New Ears festival, September 23rd to October 8th
meetingpoint: busdascoop - kapucijnenstraat 10 - 8500 Kortrijk
opening: Saturday, September 23rd, from 2pm till 8.30pm
open: every Monday, Wednesday, Friday and Saturday from 2pm to 8.30pm; Sundays from 2pm to 6pm

Posted by jo at 11:50 AM | Comments (0)

September 22, 2006

The Collision Project

collisionproject.png

To ‘See’ that which the Eye Cannot

The Collision Project is the result of a collaborative exchange between artist / director Gerhard Marx and composer Clare Loveday, lecturers at the Wits School of Arts. It combines the visual and theatrical skills of Marx with the compositional exploration of Loveday to create a haunting work that is at once concert performance, theatrical experience and installation.

The Collision Project plays with the principles on which both classical and traditional string instruments function; in this case, however, the vibration of strings is transferred into the body of a car wreck through cello and violin fragments grafted onto the car. The strings and musical intervention serve to animate through sound, and to explore the way vibration, and consequently hearing, is used to ‘see’ that which the eye cannot: vibration-based scientific practices that probe the geological structures buried in rock, the child not yet born (sonar), the tapping of train wheels to detect fractures, the car’s past embedded in the rusty folds of its current form. It is a collision between aesthetic disciplines, between makers and approaches. But it is the actual collision, the event in the body of the wreck itself, which Marx and Loveday aim to explore in this piece of ‘forensic music’, by drawing a voice from the hollow body of an abandoned car.

Performed by Vusi Ndebele, Sisekelo Pila and Barry Sherman.

THE SUBSTATION, Wits East Campus, Braamfontein
Thursday 21 to Friday 29 September 2006 at 19h00
Book at Computicket
Enquiries 011.717.1376
pisantic[at]theatre.wits.ac.za

Limited Seating Available

[blogged by nathaniel on nathanielstern.com]

Posted by jo at 06:51 PM | Comments (0)

September 21, 2006

BrainWaves

brainwaves.png

Neural Sounds

BrainWaves is a project for representing data from complex systems, such as the human brain, developed at Georgia Tech by students of neural engineering (Prof. Steve Potter). The experiment is based on pattern recognition techniques (through which data -- or patterns -- are classified on the basis of prior knowledge, or of statistics drawn from the patterns themselves), and it is inspired by the brain's ability to perceive and distinguish different audio samples. The neural activity of a neuronal culture subjected to sound stimulation is recorded through a series of electrodes and then played back through eight speakers. The aim is to render spatially how the electric pulses propagate through the culture. Moreover, eight controllers allow interaction with the neurons, simulating pulse propagation from different points in the culture. The experiment is presented as an interactive music performance, and the goal of BrainWaves is not only to understand the neurons' activity but also to construct a musical 'product' with its own aesthetic significance. - Vito Campanelli, NEURAL.

Posted by jo at 02:16 PM | Comments (0)

September 01, 2006

OPERA.tion LIFE NEXUS

cologne_v.jpg

An Evolving Opera

OPERA.tion LIFE NEXUS: Jorge Orta began this mixed media work in 1996 and it will evolve and grow in length until 2006. Based on interdisciplinary collaborations, three generations of artists are invited to create modules for the open-ended composition: music scores, dance sequences, sculpture, light and video projections. The federating theme is the heart = Life, a creative idiom approached from both a symbolic and a human perspective. Its complex mythology renders the symbol poetic.

The Babylonians designated the heart the center of intelligence and memory. For the Egyptians it represented the terrestrial support for the soul. Hippocrates referred to the heart as the organ of intelligence. Plato believed the heart was the center of feelings and passion. For Aristotle the heart transformed food into blood and was the emotional centre. Its universal nature helps to bridge cultural, social and religious differences, and from a scientific angle it exposes a new vision of a medical problem.

OPERA.tion is a locus for debate to address the sensitive subject of life saving organ donation; thousands of people are on a waiting list for organ donations... [F]renzied individualism has left society without hearts and heart !

The opera comes in a kit format and adapts to all venues. Crate sculptures facilitate transport and preserve the archive of the resulting collaborations: objects, music scores, video and photographic works, CD-ROMs, silkscreen prints, visual poetry, paintings, Body Architectures… are stored in the wooden crates, which in turn are labeled and stamped with the origin of the multiple actions throughout the world.

Jorge Orta (Rosario, Argentina, 1953) lives in Paris. Artist and architect, Jorge Orta has been working on large-scale ephemeral works since 1973. A protagonist of an urban and social poetic language, he has developed many new and alternative forms of communication: mail art, video installations and performance in the early 70's, and large-scale projections in the 80's. Over the last 20 years he has devised a polysemic alphabet of signs and symbols which allows him to address very diverse publics, and more recently he has been exploring the symbol "Heart" to consolidate art, mythology and science. Jorge's light projection work took on its full dimension in 1992, when he projected giant mobile images inside the Pompidou Centre in Paris and embarked on a human experience of projecting his signs onto the Machu Picchu mountain in Peru. Since then, he has covered the facades of the Venetian palaces at the 1995 Venice Biennale, as well as sites such as Cappadocia in Turkey, Cuenca in Spain, the Aso volcano in Japan, the cathedrals of Chartres and Evry, and many more.

Posted by jo at 12:09 PM | Comments (0)

Bring Your Own Voice Museum Tour

231001886_d0d0f0c7fb.jpg

Halsey Burgund @ P.S.1

It's like a directed Speaker's Corner run by Le Tigre...only as interesting as the participants AND editor - PJ

WHEN: Sunday, September 17th, 12–6 PM :: WHERE: P.S.1 Contemporary Art Center, 22-25 Jackson Ave, Long Island City, NY :: Musician, Halsey Burgund, will bring his portable recording booth to P.S.1 Contemporary Art Center on September 17th as part of his Bring Your Own Voice Museum Tour. The tour will take Burgund to a number of art museums in the northeast where he will ask interested participants to voice their opinions about art and museums while being recorded. He will then use these recordings as raw material for original music.

Burgund is specifically targeting art museums on this tour because he wants to explore questions about art and museums and their respective roles in individuals’ lives as well as in our current society. He will pose questions to participants designed to elicit honest responses about why they are at the museum, what they think about specific pieces at P.S.1, and what our society would be like without art and the institutions that promote it.

Upon completion of the tour, Burgund will take the diverse set of thoughts and opinions he gathers in New York, and - along with collections from other museums - will compose original music representing a collective, yet highly personal, view of art and museums today. The notion of creating music out of everyday expressions not normally considered musical is the cornerstone of Burgund’s approach to composition. He is interested in the spoken voice not only as a kind of instrument (the tonalities, rhythms, unintended melodies etc), but also as a method of communicating personal experiences and opinions.

News via: PS1.org

[Posted by Paddy Johnson on reBlog]

Posted by jo at 09:09 AM | Comments (0)

August 24, 2006

One Hundred and Four Thousand

burgund.gif

Call for Voice-Mail Contributions

One Hundred and Four Thousand :: Through the evocative memorials at Forest Hills, people from the past "speak" to us about their relationships and lives. Halsey Burgund uses 21st century technology to add a new layer of expression to this landscape. His sound collage combines music with fragments of conversation he collected during interviews at Forest Hills.

You can download the entire piece to your computer and iPod or listen online: I - Remembering the Dead [mp3 - 5:56] II - Life to Grow [mp3 - 2:16] III - I Just Want to Be Recycled [mp3 - 3:12] IV - How Infinite Memories Can Be [mp3 - 9:18] V - The Stately Oak and the Predatory Owl [mp3 - 4:23]

You can alternatively listen via cell phone by calling 617.344.6796; this system will also collect voice-mail contributions, which the artist will selectively incorporate into the piece on an ongoing basis.

"I wanted to explore the cemetery’s contemplative nature as well as the relationships that various cemetery ‘dwellers’ have to their surroundings. My intent was to create music that evokes my own personal feelings about spending time within the borders of Forest Hills through the words and voices of other cemetery dwellers.

Over the course of several months in early spring, I interviewed 26 people in the cemetery, both visitors and employees. I used these recorded interviews in tandem with traditional instruments. For listeners, the music and voices will mix with the ambient sounds, smells and sights they encounter as they make their way through the cemetery on paths of their choosing." -- Halsey Burgund

Posted by jo at 09:53 AM | Comments (0)

August 21, 2006

PLAY.Orchestra

play1.jpg

Sit and Play

If you go down to the South Bank in London this summer, outside the Royal Festival Hall is a wonderful installation titled PLAY.Orchestra. 56 plastic cubes and 3 Hotspots are laid out on a full size orchestra stage, each cube containing a light and speaker. Sit down on the cube or stand in the hotspot to turn on that instrument and bring 58 friends to hear the full piece. People with Bluetooth phones will be able to receive a ringtone of the piece created, as well as upload their own sound samples in September.

The production blog gives an interesting insight into the installing process & the complexity of the project. PLAY.orchestra is the result of a collaboration between Central Saint Martins College of Art & Design MA Creative Practice for Narrative Environments, South Bank Centre Education and the Philharmonia Orchestra.

Daniel Harris at Central Saint Martins was involved in the project from the beginning, details from his site:

"I was then responsible for the various human, computer and physical interfaces used in the build. 2 miles of cable are used to connect the 60 seats with the control system, which runs on a midi based Max MSP system.

The Objective was to get non-traditional audiences to experience an orchestra from new angles, and to involve themselves in activity of the philharmonia orchestra. For this purpose, a bluetooth system was set up, where members of the public can submit their own samples for composers to put together into a piece that will be played in the last few weeks of the installation."

IamtheMightyJungulator were also involved in the software side.

Photos stolen from Dan Harris Flickr Set.

(thanks for the tip Leslie, via BBC News) [posted by chris on Pixelsumo]

Posted by jo at 05:12 PM | Comments (0)

August 18, 2006

p-tex performs GYOML

ptex.jpg

DiY in a Field

p-tex will perform GYOML in a field on Heysham Barrows near St. Peters Church on Saturday August 19, 2006 at 7.30 p.m. For live streaming copy and paste this URL into your media player's url window.

p-tex will be exploring how computer musicians can perform and create music without mains electricity, using low-budget or recycled / salvaged equipment and free open source software. He will also be using various sensors that will be manipulated and triggered by the surrounding environment, as well as by visitors to the site. p-tex has created software patches specifically for the piece which enable the sensors to communicate with the computer equipment. To achieve this he has used the free software packages Pure Data and Processing. For directions to Heysham Barrows, go here.

Posted by jo at 09:18 AM | Comments (0)

August 17, 2006

Full Contact Concert

0boxyx.jpg

Let's Get Physical

Vollkontaktkonzert - Full Contact Concert, by Stefan Brunner and Michael Wilhelm, arranges a musical composition out of a boxing match. Two performers are equipped with microphones. One microphone, connected to a stethoscope beneath the scapula on the performer’s back, records the repercussions of the punches hitting his chest. The other microphone, glued to the performer’s chin, records his breath.

The sounds are post-processed on a laptop controlled by a third person. The piece of music unfolds: the introduction is the warm-up. Then the performers put on the boxing gloves and a beat part begins: the performers start beating each other in turns. Two punches to the stomach, then one to the chest; two to the stomach, one to the chest, and so on. It goes faster and faster until the break, when all you can hear is the performers' breath and sounds from the laptop. After they have recovered from the exhaustion, the boxers start to treat each other to more punches to their chests and stomachs.

The video isn't particularly spectacular but I still like the concept. More images. Via Vvork.

00boobu.jpg

There are a few boxing performances and installations around: Videoboxing, a video performance in which Thomas Heijmans and Eliane Roest merge video, new media, music, fashion, choreography and physical violence on stage; Telephoneboxing (left), by Allard Roeterink, which turns the phone call into hard physical work; Soundslam, a highly sensitive punching bag with built-in trainer; Cyclone Uppercut, a boxing game that runs in Virtual Reality Simulators. [blogged by Régine on we-make-money-not-art]

Posted by jo at 07:28 PM | Comments (0)

August 08, 2006

Receiver #16

receiver16.gif

Social Networking the Mobile Way

Receiver #16 wants to spark off some ideas about social networking the mobile way: clubbing, seeing your favourite band, sharing memories of a night out or playfully exploring the city, getting to know and experiencing, even creating, music – can mobile add to all these? And how does it affect how we get our friends together for joint action? Does it trigger emergent behaviour? Or is it the ideal means to pull it all together? What do *you* think?

Lee Humphreys: Out with my mobile - exploring social coordination in urban environments :: Tim Cole: The mobile phone as the next electric guitar (or any other instrument you want) :: Rudy De Waele: Connecting cultures through music :: Charlie Schick: One night - a global story of one night in the mobile life :: Antony Bruno: Where the long tail ends :: Karenza Moore: Come together - the use and meanings of mobiles amongst UK clubbers :: Frank Lantz: Big Games and the porous border between the real and the mediated :: Mark Curtis: Mobilising our meat based selves - social planning while on the hoof.

Posted by jo at 02:21 PM | Comments (0)

July 24, 2006

Satellite Jockey

satellite_jockey.jpg

djing Google Earth

"If music is about time, Satellite Jockey by Rick Silva is music sparked off space. The perceptive revolution enabled by Goggle Earth is the one of being able to visually navigate most part of the world (including your own neighborhood) through satellite photos. This overwhelming new perspective (a sort of 'God' omnipresent view) is rewriting in the users of this software the spatial coordinates, and the awareness of being in one place, that becomes the center of the universe of choice. Satellite Jockey is a performance that uses Goggle Earth navigating the world's surface through the Google's generated stream of data using it as a source material for audio visual performances. The represented territory, in the software various forms, is then the (infinite) stage on which the performance take place, zooming, flying over, turning around pixel corners or drowning into abstract minutiae. The music follows the visuals, strengthen the vague mood of being ubiquitous and lonely. The author is performing regularly and a preview of one of his latest remixes debuted at the last Futuresonic." [posted on Neural]

Posted by jo at 05:31 PM | Comments (0)

July 18, 2006

Turbulence Commission:

swm_micro_img.jpg

SWM05: Distributed Bodies of Musical-Visual Form

SWM05: Distributed Bodies of Musical-Visual Form [SWM05] -- by Troy Innocent and Ollie Olsen with the Shaolin Wooden Men and Harry Lee -- features the distributed bodies of musical-visual form that are inhabited by the Shaolin Wooden Men (SWM), a virtual band, a 'gang of numbers' -- me(a)tacodeflesh. SWM require your assistance to manifest as media creatures. They invite you to send them images of your local environment in which they can appear. Sending images unlocks access to the SWM05 mobile site which consists of downloadable micromusic ringtones and small screen machinima performances. The SWM are everywhere. In a meshwork of wireless entities, they are media creatures seeking a fragmented existence to be consumed in the nanoseconds of play-time in the emerging wireless net. SWM05 will transfigure the SWM by embodying them in a new materiality.

SWM05: Distributed Bodies of Musical-Visual Form is a 2005 commission of New Radio and Performing Arts, Inc., (aka Ether-Ore) for its Turbulence web site. It was made possible with funding from the Andy Warhol Foundation for the Visual Arts.

BIOGRAPHIES

The SHAOLIN WOODEN MEN are a 'gang of numbers' whose bodies are 'made of sound'. In their various manifestations they have released three full-length recordings - "S.W.M." (1992), "The Hungry Forest" (1994) and "Supermindway" (2001) - and a collection of singles and remixes released on the Psy-Harmonics label. The S.W.M. work across image, sound and interactivity and have performed at DEAF96 and exhibited at ISEA96. Typically, they require the assistance of creative humans to manifest as media creatures to be distributed across the net.

TROY INNOCENT has been exploring the 'language of computers' and the new aesthetics of digital space since 1989. In recognition of this work, Innocent has been described as "the first philologist of virtual reality". His artificial worlds – Iconica (SIGGRAPH 98, USA), Semiomorph (ISEA02, Japan), and lifeSigns (Ars Electronica 2004, Austria) and Ludea (SIGGRAPH2006, USA) – explore the dynamic between the iconic ideal and the personal specific, the real and the simulated, and the way in which our identity is shaped by language and communication. He is currently Senior Lecturer, Department of Multimedia and Digital Arts, Monash University, Melbourne.

OLLIE OLSEN is an Australian composer, synthesist and sound designer who has been producing and performing rock, electronic and experimental music for the past thirty years. Projects include "Max Q," "NO," "Third Eye," "Orchestra of Skin and Bone," "Shaolin Wooden Men," and "I am the Server." Recent collaborations and projects include performing with Negativland (from the USA, 2001); guest soloist with the Australian Art Orchestra (2002); and recording with the Japanese bands BOREDOMS and AOA (2001-2002).

HARRY LEE is a web developer working with Macromedia Flash, SQL, PHP and related technologies. Recent projects include database development for lifeSigns, exhibited at the Australian Centre for the Moving Image (ACMI) in 2004, in addition to numerous corporate and education projects. He lectures in multimedia and digital arts in the Faculty of Art & Design at Monash University.

Posted by jo at 01:06 PM | Comments (0)

July 06, 2006

Second Life

robbie_dingo_hyper_cello.jpg

Hyper-Instruments and Immersive Spaces

VIRTUAL MUSIC COMPOSITION: Unsatisfied with just creating virtual plant life, a provocative multiplayer game, and ambitious machinima, Robbie Dingo has also been hard at work creating virtual world musical instruments that actually play in-world in real time. He extensively documents the creation of his "Hyper-Instruments" on his blog here and here, and recently sent me word of a song-writing contest for music composed to be performed on his new SL cello.

immersion.gif

ADVENTURES IN IMMERSION: Seasoned RL/SL architect Jauani Wu takes us on a personal tour of successfully immersive spaces in Second Life, accompanied by a 3D designer's manifesto written e.e. cummings style: immersion requires depth. it's not sufficient for one enclosed space to be finely constructed. it requires that the next space be so as well. and the space after that. immersion works better when the surface of one space frames the next. it reinforces the notion of world. By my lights, mandatory reading for builders looking to truly transform the world according to their vision. Read it all here. [posted on New World Notes]

Posted by jo at 02:45 PM | Comments (0)

July 05, 2006

RainDance + A Light Rain

raindance2_01.jpg

Water Music

RainDance is an interactive sound installation by electronic media artist Paul DeMarinis. The installation uses water to create non-water sounds. Jets of water modulated with audio signals carry sound vibrations that are inaudible to the human ear. The sounds cannot be heard until the water jet is intercepted by a large umbrella. The sound is then decoded and resonated from the surface of the umbrella. The umbrella thereby turns into a new and different kind of loudspeaker.

Diverse electronic and acoustic material is modulated onto the streams so that people strolling along under the water jets can listen to, mix, modify and sequence a variety of sonic material, ranging from musical standards to rhythmic and electronic music.

The installation was exhibited at the Ars Electronica festival in 2001.

LightRain.jpg

Also check out the related work by Paul DeMarinis, A Light Rain, in collaboration with Rebecca Cummins. In A Light Rain participants carry the umbrella into a rainbow created by streams of water. Music can then be heard as the umbrella meets these streams of water.

Check out the video of RainDance. [blogged by Lene Mailund on Digital Experience]

Posted by jo at 08:47 AM | Comments (0)

July 03, 2006

Music 4100 Computers

cocked_large.gif

Call for 100 Online Performers

Sean Kerr's first performance of M4100 will happen on July 9th, 12:00 NZ time, at the Govett Brewster Gallery, New Plymouth, New Zealand. M4100 requires 100 online performers to create the event. Using a Flash multi-user server, all 100 online performers work together to create a sonic and visual event that is live-mixed by Sean Kerr at the Govett Brewster Gallery auditorium.

If you are interested in participating in this event, please email your details to Sean Kerr [s.kerr at auckland.ac.nz] M4100 is coinciding with the exhibition 'what color does sound make?' and the sound performance event sound/bodies @ the Govett Brewster Gallery. M4100 is a 2005-2006 Rhizome.org commission.

Posted by jo at 05:21 PM | Comments (0)

June 21, 2006

Sonic Interface by Akitsugu Maebayashi

21webber.jpg

Past Remixed with Present

"Our experience of reality is strongly dependent on the synchronicity of our senses. We must, for instance, hear or touch what we see while we see it, in order to be able to determine reality and in order to decide what to do or how to react. The decoupling in time of sight and sound - like when we first see the lightning and then hear the thunder - can create a disturbing irritation when it affects our immediate surroundings: imagine that you would only hear the cars passing you on the street after they have already past, or that you hear conversations which were held minutes ago in a different location from where you are.

Sonic Interface experiments with human perception by amplifying and manipulating the synchronicity of the auditory environment. Equipped with a portable hearing device made of a computer and headphones, the user is invited to walk around the city's public spaces such as squares, shopping malls, and underground stations. The random urban sounds that he hears are first transmitted to the headphones without modification, but then the computer programme begins to create an artificial sonic environment from the sounds that it picks up.

Three different types of software feed the headphones with digitally manipulated sounds. In one instance, the ambient sounds are delayed to different degrees, decoupling the visual and the auditory perception of the surrounding space. Then the sounds are cut up and recomposed into a mosaic with a new chronological order. Finally, the sounds are made to repeat themselves and overlap with each other. In each case, the sonic ambience and the space in which the sounds were formed in the past are being remixed in the present." From Machine Times
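The three manipulations described in the quote -- delaying, cutting up and re-ordering, and repeating/overlapping -- are easy to sketch on a mono sample buffer. This is a minimal illustration, not Maebayashi's own software; the sample rate, chunk length and decay values are assumptions:

import numpy as np

SR = 44100  # assumed sample rate

def delay(signal, seconds):
    """Shift the ambient sound later in time, decoupling ear from eye."""
    pad = np.zeros(int(seconds * SR))
    return np.concatenate([pad, signal])

def cut_up(signal, chunk_seconds=0.5, seed=None):
    """Slice the stream into chunks and reshuffle them into a new chronology."""
    n = int(chunk_seconds * SR)
    chunks = [signal[i:i + n] for i in range(0, len(signal), n)]
    np.random.default_rng(seed).shuffle(chunks)
    return np.concatenate(chunks)

def echo_overlap(signal, repeats=3, gap_seconds=0.25, decay=0.6):
    """Let the sounds repeat themselves and pile up on top of one another."""
    gap = int(gap_seconds * SR)
    out = np.zeros(len(signal) + repeats * gap)
    for r in range(repeats + 1):
        out[r * gap:r * gap + len(signal)] += signal * (decay ** r)
    return out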

Akitsugu Maebayashi (1965) is a Japanese-born artist who locates a new sense of reality and consciousness in the act of communication by expanding one's perceptions. His works constantly trigger communication between the user and the other. His interest in music and musical instruments has also provided him with a method for representing time and space. Maebayashi’s work has been exhibited and performed worldwide, offline (Tokyo Design Center, ICC) as well as online (live online concert series with Chino Shuichi and others). [via WMMNA]

Posted by jo at 06:39 PM | Comments (0)

June 13, 2006

Sonic Graffiti

121wewe.gif

Geo-tagged musical graffiti

Chia Ying Lee's Sonic Graffiti invites urban artists to collaborate and create music together, while allowing the passersby to enjoy it as well. A system of devices enables graffiti artists to create and geo-tag music in the urban space with real spray cans:

- The sound cap snaps onto the top of a spray can, letting artists spray out sounds and perform simple sound manipulations with gestures. Users create music by overlaying/remixing various paints/sounds from the caps. Each cap can store up to 4 sounds in its memory card. Sounds can be loaded from computers or portable devices like an iPod or mobile phone. Gestures for manipulating sound include fade in/out and scratch. Several artists spraying at the same time can create a sound composition.

- The controller is used for listening to the music on earphones while creating, and for positioning sounds. It also includes a recording component that can be used to collect sound samples from the city.

- The Boom box provides a shared listening experience for a group of creators in the public. Collaborations can be achieved both synchronously and asynchronously.

- Audiences can download a dedicated software player to install on mobile devices. Each graffiti piece is a small radio station: the player automatically tunes into the music of the nearest sonic graffiti as you move through the city (a minimal nearest-tag lookup is sketched just after this list). You can also mark the locations of music you like, and so build a personal sonic graffiti map.
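The nearest-tag lookup mentioned in the last point could be as simple as the following sketch; the coordinates and filenames are made up, and a real player would also need GPS polling and a distance cut-off:

import math

# Illustrative only: each sonic graffiti tag is (latitude, longitude, track).
graffiti_tags = [
    (51.5236, -0.0754, "brick-lane-loop.mp3"),
    (51.5142, -0.0931, "barbican-drone.mp3"),
]

def nearest_tag(lat, lon):
    """Return the track of the graffiti tag closest to the listener."""
    def dist(tag):
        t_lat, t_lon, _ = tag
        return math.hypot(lat - t_lat, lon - t_lon)  # good enough at city scale
    return min(graffiti_tags, key=dist)[2]

# The player would poll the phone's position and retune as the listener walks:
print(nearest_tag(51.520, -0.080))   # -> "brick-lane-loop.mp3"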

This project gives graffiti an audio dimension and may change people’s view of graffiti. The music can also serve as a soundtrack reflecting the vibes of the city.

More details on Chia-Ying Lee's thesis blog. My images of her installation.

Related: spatial graffiti, Wearable inkjet printer for street art, Audio messages left in space, Graffiti Analysis projections, Hektor the graffiti machine, emergency spray can, Bench'Mark. [blogged by Régine on we-make-money-not-art]

Posted by jo at 08:19 AM | Comments (0)

June 02, 2006

Zipper Orchestra

zip_overall.jpg

Ziptease

Zipper Orchestra is an interactive video installation that combines the Conductor Musical Score as a physical controller and the Zipper Actions Collage Video as musical display.

The screen is filled with 9 video clips of different people zipping and unzipping their clothes. By moving 9 physical sub-zippers and 1 main zipper attached to a canvas, users conduct the individual zipper motions on the screen. The main long zipper controls all 9 motions together. As the audience zips and unzips zippers on the musical score, different string-instrument pitches come out of the speaker.

A work by Joo Youn Paek (who also made the polite umbrella!), in collaboration with Gabe Barcia-Colombo (sound design) and Leif Mangelsen (software tech). [blogged by Régine on we-make-money-not-art]

Posted by jo at 04:06 PM | Comments (0)

May 31, 2006

Franziska Baumann + Matthew Ostrowski

1326.gif

VOICE SPHERE

VOICE SPHERE / ARCHITECTURE & DESIRE (for voice, glove, and live electronics), featuring Franziska Baumann and Matthew Ostrowski; Slought Foundation | Friday, June 02, 2006; 8:00-10:00pm; $10.00 at Door (Reservation not required)

Please join us for a special concert with Swiss artist Franziska Baumann (voice and sensorglove live electronics) and Matthew Ostrowski (laptop and P5 glove). Baumann and Ostrowski will each play solo, then in duo. The interactive technologies pioneered by these artists enable them to control articulations of their voices and computers in real time via gesture and movement. A variety of sensors connect their music to the real world of physical phenomena.

baumann.gif

As a vocalist, Franziska Baumann explores the human voice as a multi-faceted instrument and has developed an extensive vocabulary of experimental and extended vocal techniques such as multiphonics and glottal clicks, as well as a variety of unique microtonal, timbre-modifying, and percussive vocal techniques that have become her signature sounds. In her performances, Baumann often modulates and "electrifies" her voice through the use of a SensorLab based Cyberglove linked to her Mac. She sculpts her live voice in combination with pre-composed multitrack sound-map via gesture and movement.

Matthew Ostrowski has been using electronics since the early 1980s. In an attempt to bring a truly instrumental quality to live computer music practice, he has developed a system based around the P5 glove, a commercially available video game controller.

utrecht01.jpg

This device is connected to a Max/MSP program of his own design, which uses principles of physical modeling to control musical parameters. By manipulating virtual objects in a multidimensional parameter space, his instrument brings some of the nonlinear behaviors of physical objects into the electronic domain.

Slought Foundation | New Futures for Contemporary Life: 4017 Walnut Street, Philadelphia | Ph: 215.222.9050 | Thur-Sat 1-6pm | Director: Aaron Levy, alevy[at]slought.org

Posted by jo at 11:51 AM | Comments (0)

May 22, 2006

SoniColumn

musicalcolumn.gif

Monumental Music Box

SoniColumn is a tall, column-like cylinder that can be played by touch. Grids of LEDs installed inside the column light up in response to the users’ touch and emit unique sounds. When a user cranks the handle, the column slowly rotates and plays back the light patterns of the user’s touch.

By Jin-Yo Mok who also designed the Light Bead Curtain and is working on a portable version of his Music Box.

SoniColumn is currently exhibited at the bitforms Gallery in Seoul, Korea. [blogged by Regine on we-make-money-not-art]

Posted by jo at 11:16 AM | Comments (0)

May 17, 2006

Michael Markert's m3 at Cybersonica

m3arc_q-01_lisa.jpg

Virtual Music Gate

Michael Markert's m3, a virtual music gate, senses body movements in the space between two illuminated columns and processes the data in realtime into harmonic and rhythmic music. The art arises with the interaction of its users and their different and unique behavior in the gate: they can go through or stay in between, shake and bend their bodies in the gate, dance, stand still or crawl… and listen to how their movements are turned into a sound experience.

The gate is operated by a matrix of distance sensors which triggers and alters different sounds. Cocoa-based software processes the realtime MIDI events (harmonize & quantize) and routes the signals to a complex output setup of software synths and beat generators. (video) Currently at Cybersonica. [blogged by Regine on we-make-money-not-art]
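Markert's system is Cocoa- and MIDI-based, so the following Python fragment is only a rough sketch of what "harmonize & quantize" can mean here: snapping a sensor reading onto a note of a scale, and snapping an event time onto a beat grid (the scale, note range and tempo are assumptions, not the installation's settings):

C_MINOR = [0, 2, 3, 5, 7, 8, 10]   # pitch classes of an assumed scale

def harmonize(distance_cm, low_note=48, span_cm=200):
    """Map a distance-sensor reading onto the nearest note of the scale."""
    clamped = min(max(distance_cm, 0), span_cm)
    degree = int(clamped / span_cm * (len(C_MINOR) * 3 - 1))
    octave, step = divmod(degree, len(C_MINOR))
    return low_note + 12 * octave + C_MINOR[step]   # MIDI note number

def quantize(event_time, bpm=120, grid=0.25):
    """Snap an event onto the nearest sixteenth note (grid is a fraction of a beat)."""
    step = (60.0 / bpm) * grid
    return round(event_time / step) * step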

Posted by jo at 07:33 PM | Comments (0)

April 25, 2006

BreadboardBand

3-thumb.jpg

Electronic Improvisational Music Performance with DIY wiring

The Breadboard Band is a performing band that plays music with breadboards holding freely constructed electronic circuits. We produce audio and visual expression through the most minimal, fundamental elements, exposing the electronic components of an instrument while directly touching and forming the electronic circuit by hand. The electric signals released from hand-made electronic circuits have extremely rough and ferocious wave patterns. This performance is based on improvisational interplay, and we pull powerful music into shape through each member's operation, while discovering new sounds by hand.

Keywords: Breadboard, Improvisation, Electronic Musical Instruments, On-the-fly Wiring, Bending, Techno-Noise, iPod, Discrete, Musical Performance, Programmable Device

bbb51123_05.jpg

The Breadboard Band is one that uses a breadboard to perform music. A breadboard is a board perforated with a grid of connector holes, into which electronic components are inserted in order to build a prototype of an electronic circuit. The electronic components can be inserted or removed with ease, making it simple to change the wiring with jumper cables. Utilizing the features of the breadboard, The Breadboard Band creates audio and visual circuits on the board and modifies them during performance.

Today, 100 years after the 1906 public performance of the Telharmonium, the first electronic musical instrument, The Breadboard Band raises objections to black-box electronic musical instruments and computers. This objection is raised in the form of showing the electronic components of an instrument, directly touching and forming the electric circuit by hand, and producing audio and visual expression through the most minimal, fundamental elements. This can be considered the hardware version of software programming. Changing the circuit during a performance is called "On-the-fly Wiring".

bbb_sys.jpg

The performance of the circuits on the breadboard is less than 0.1% of that of commercially available electronic audio and video devices. The electric signals released from hand-made electronic circuits have extremely rough and ferocious wave patterns that might destroy a commercial instrument. However, the primal screams of ecstasy released from the electric circuits surge from the depths of a modern society surrounded by sophisticated information technology, and stir us with emotion.

The Breadboard Band's performances are based on improvisational interplay, and we pull powerful music into shape through each member's operation, while discovering new sounds by hand. Various elements blend together, becoming one: beats made with analog oscillation circuits, riffs made with programmable chips, noises made with a magnetic head, scratches made with a hacked iPod, and videos of the changing audio signals. It may be quite humorous to see the serious expressions of the performers as they grapple with small electronic components, but they match any band in vigor and potency.

Posted by jo at 01:48 PM | Comments (0)

Quoth

signature.jpg

NetJamming with Omniscient Interpreter

Quoth is a dynamic interactive fiction system, in which authoring is done from a player's perspective, from within the running work. Quoth draws upon the concepts of pervasive anthropomorphisation, executable natural language, and revisionist narrative. The major use of Quoth so far has been for musical livecoding.

In traditional interactive fiction, the player speaks to an omniscient interpreter. There may be dialogue with "non-player characters", but it is mediated by the interpreter. In Quoth the player is always speaking directly to some item in the universe. The traditional omniscient interpreter is represented by the universe itself. This allows each item in the universe to have a different vocabulary, or even a different "interpreter" altogether. It also provides the player with more fluid interaction with each item...
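As a toy illustration of the idea that every item carries its own vocabulary rather than deferring to one global parser -- this is not Quoth's actual implementation, and the item names and responses are invented -- consider:

class Item:
    """Each item carries its own vocabulary instead of relying on a global parser."""
    def __init__(self, name, vocabulary):
        self.name = name
        self.vocabulary = vocabulary     # verb -> response

    def hear(self, utterance):
        verb = utterance.split()[0].lower()
        return self.vocabulary.get(verb, f"The {self.name} does not understand.")

universe = {
    "drum": Item("drum", {"hit": "boom!", "stroke": "a soft rustle"}),
    "bell": Item("bell", {"ring": "ding", "hit": "clang"}),
}

def say_to(item_name, utterance):
    """The player always addresses some item; the universe just routes the words."""
    return universe[item_name].hear(utterance)

print(say_to("drum", "hit hard"))   # -> boom!
print(say_to("bell", "stroke"))     # -> The bell does not understand.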

Posted by jo at 08:14 AM | Comments (0)

April 24, 2006

Cobi van Tonder

vanTonder250x.jpg

Skateboard music interface

Cobi van Tonder, author of the brilliant Ephemeral Gumboots, has been commissioned to create a new work for ISEA2006. The project, Skatesonic, uses the motions and sounds of skateboards and explores their inherent ambient rhythm to create music. Each move translates into musical parameters, and the rider ends up skating through a landscape of music (which s/he influences over time).

Skatesonic will work in both solo and group situations. The system "listens" to space through movement, which it maps out and translates into music. Each of the four boards will map to a unique sound and set of structural parameters, so if there are 4 riders they will be able to jam like a band. For example, Skatesonic will allow skaters to buffer through a sound file in Max, meaning that as they roll over a certain distance it is as if they have a record needle under the board, and every inch of movement progresses the sound. The live microphone input also reveals information about the texture of the surface under the board and the intensity of movement. From an interview with the artist by Sylvie Parent. [blogged by Regine on we-make-money-not-art] Related project: Musique Concrete by Simon Morris.
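The "record needle under the board" idea can be sketched as a mapping from cumulative rolling distance to a read position in a sample buffer. Skatesonic itself is built in Max; the constants below (wheel size, metres of travel per second of audio) are assumptions for illustration only:

import numpy as np

WHEEL_CIRCUMFERENCE_M = 0.17     # assumed wheel size
SECONDS_PER_METRE = 0.5          # assumed: seconds of audio per metre rolled

def playback_position(wheel_revolutions, sample_rate=44100):
    """Translate distance rolled into a read position in the sound file."""
    metres = wheel_revolutions * WHEEL_CIRCUMFERENCE_M
    return int(metres * SECONDS_PER_METRE * sample_rate)

def read_grain(buffer, wheel_revolutions, grain=2048):
    """Like a needle under the board: every inch of movement advances the sound."""
    pos = playback_position(wheel_revolutions) % max(len(buffer) - grain, 1)
    return buffer[pos:pos + grain]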

Posted by jo at 05:29 PM | Comments (0)

March 12, 2006

CHINA GATES: Mobile Music Performance

chinagates.gif

Work for Tuned Gongs and Wrist Conductor

The Digital Art Weeks 2006, organized by members of the Computer Science Department of the ETH Zurich, is looking for up to twelve people interested in contemporary music and art who would like to participate, on a voluntary basis, in the performance of a new Mobile-Music work under the direction of sound artist Art Clay, using GPS and mobile computer technologies.

Aesthetically, the work China Gates for tuned gongs and Wrist Conductor is rooted in works for open public space and belongs to a series of works which celebrate the use of innovative mobile technologies to explore public space and audience. Technically, the work is based on the possibility of synchronizing a group of performers using the clock pulse emitted by GPS satellites. The GPS Wrist Conductor signals each player when to hit the gong. An intense rippling effect results as the players gradually move around the park and the music of the gongs shifts back and forth from intense chords to exotic melodies.
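Because every Wrist Conductor reads the same satellite clock, each unit can compute its own cue locally, with no radio link between players. A hedged sketch of that scheme follows; the strike period and per-player stagger are invented values, and Clay's actual timing logic may differ:

import time

PERIOD = 8.0          # seconds between strikes for one player (assumed)
PLAYER_OFFSET = 0.5   # stagger between neighbouring players (assumed)

def next_strike(player_index, now=None):
    """All wrist units share GPS time, so each can compute its own next cue."""
    now = time.time() if now is None else now
    phase = player_index * PLAYER_OFFSET
    k = int((now - phase) // PERIOD) + 1
    return k * PERIOD + phase

def wait_and_signal(player_index):
    """Sleep until this player's next cue, then signal the strike."""
    delay = next_strike(player_index) - time.time()
    time.sleep(max(delay, 0))
    print(f"Player {player_index}: strike the gong now")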

The preparation for the event will take place in the form of a mini-workshop on mobile music and sound in open space.

Persons interested in participating are asked to apply by sending a message to the email address below. Please include a short biography (50 words) telling us about yourself and why you would like to participate. Selected players will receive a festival pass for all Digital Art Weeks 06 events. Send to arthur.clay[at]inf.ethz.ch

Posted by jo at 11:30 AM | Comments (0)

March 08, 2006

sCrAmBlEd?HaCkZ!

scrambled.gif

Realtime-Mind-Music-Video-Re-De-Construction-Machine

A copyright-abusing machine and a creative intelligence instrument at the same time, sCrAmBlEd?HaCkZ! is described by its creator, Sven König, as a bastard between a database and a sensitive composer for radical plagiarism.

sCrAmBlEd?HaCkZ! attempts to develop an artistic strategy that could shed some light on evident but confusing problems of intellectual property.

The mind music machine is a piece of software which consists of a pre-analyzer, a database and a synthesizer. Using the pre-analyzer it is possible to automatically split up audio material into small, musically and rhythmically meaningful snippets. The sonic properties of each snippet are extracted and saved in a database, so that a soundpool of samples referenced by their sound signatures is available.

The synthesizer analyzes an audio input stream, again splits it up into small snippets and calculates their sound signatures. For every one of the input snippets, the best match out of all the snippets in the database is found, and each input snippet is continuously replaced by the best-matching (most similar-sounding) snippet from the database. The audio input, which can be other music or, as I use it, just the human voice, effectively describes music to be automatically constructed out of samples found in the database. Video. Performance during the VIPER festival, in Basel, Gare du Nord, 18 March 2006. [blogged by Regine on we-make-money-not-art]
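König's own analysis is more sophisticated, but the pipeline described above -- fingerprint every snippet, then swap each live snippet for its nearest neighbour in the database -- can be sketched roughly like this (the 32-bin spectral signature is an assumption for illustration, not the software's actual feature set):

import numpy as np

def signature(snippet, bins=32):
    """Reduce a snippet (assumed at least a few hundred samples) to a coarse spectral fingerprint."""
    spectrum = np.abs(np.fft.rfft(snippet))
    chunks = np.array_split(spectrum, bins)
    sig = np.array([c.mean() for c in chunks])
    return sig / (np.linalg.norm(sig) + 1e-9)

def build_database(snippets):
    """Pre-analysis: store every snippet alongside its signature."""
    return [(signature(s), s) for s in snippets]

def best_match(live_snippet, database):
    """Replace an incoming snippet with the most similar-sounding stored one."""
    live_sig = signature(live_snippet)
    return max(database, key=lambda entry: float(live_sig @ entry[0]))[1]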

Posted by jo at 05:29 PM | Comments (0)

February 22, 2006

fastbreeder

wwjbig.gif

An Experimental Genetic Programming Synthesiser

Fastbreeder is essentially a 4-button synth. The idea is to grow code by choosing from a range of automatically generated variations of functions; you don't have to know how they work, but each function creates a sound which you can select. The following generation is then created, containing mutants of your chosen sound. You can refine and develop the sound just by auditioning and choosing the best one each time.

The interface shows you the code graphically, as a tree. These trees are attempts to clearly represent code structure, giving you some feedback on what is being evolved. Conventional listings are not that informative with GP (genetic programming), as deeply functional code such as this is hard to read linearly...
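A bare-bones version of that interactive genetic-programming loop -- generate candidate function trees, let the user pick one by ear, breed mutants of the pick -- might look like the following (the node types and mutation rate are placeholders, not Fastbreeder's actual function set):

import random

OPS = ["sin", "add", "mul"]

def random_tree(depth=3):
    """Grow a small expression tree that would later be rendered as sound."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(["t", round(random.uniform(0.1, 5.0), 2)])
    return (random.choice(OPS), random_tree(depth - 1), random_tree(depth - 1))

def mutate(tree, rate=0.2):
    """Copy a tree, occasionally swapping a node for a fresh random subtree."""
    if random.random() < rate:
        return random_tree(2)
    if isinstance(tree, tuple):
        return (tree[0],) + tuple(mutate(child, rate) for child in tree[1:])
    return tree

def next_generation(chosen, size=4):
    """The four 'buttons': the chosen sound plus mutants of it."""
    return [chosen] + [mutate(chosen) for _ in range(size - 1)]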

Posted by jo at 06:40 PM | Comments (0)

February 03, 2006

Helen Thorington

net_perf_blog2.gif

Music, Sound and the Networked Performance Blog

"...The concept of the "new" that most of us carry with us is one in which something that has existed (the old) is replaced with something that has come into existence only recently. It's a sort of "either-or" model. Digital media, however, are about intermingling and fusion. They are about "both-and". As a colleague recently remarked, the overlapping of differences [and similarities] that Digital Media allow [makes for] a staggering hybridity of not only art forms but of just about everything amenable to the digital…This activity, however, creates its own ruptures, its own "newness" and unrecognizables.

Is this new? Well yes and no. Anyone familiar with the work of Dick Higgins will remember that while he did not use the word "hybrid," he did identify "fusion," not only as a basic hunger in us all, but as a characteristic of the new arts of his time: He spoke of happenings, for instance, as a three-way fusion between theater, music and visual art. He suggested that "… art performances, so prevalent just now," might be a five-way fusion between theater, literature, music, visual art, and life?" From Music, Sound and the Networked_Performance Blog [PDF] by Helen Thorington.

Posted by jo at 11:29 AM | Comments (0)

February 02, 2006

Transmediale 06: Maurice Benayoun

exhibit_benayoun_maurice_01.jpg

Emotion Vending Machine 2006

Also at Transmediale 06: Maurice Benayoun's Emotion Vending Machine--The mechanics of the world’s emotions evolve in the zone where economics and politics converge, in the world of product placement. Through its concept of artistic merchandizing, the Emotion Vending Machine deals with the production of emotions in an ironic way. The Machine takes Internet data as a global pool of emotions. Users can select up to three emotions from a list of nine emotional states, including hate, desire, or despair. The emotions are represented by 3D-maps of word clusters, extracted from the web and generated in realtime.

These maps show the emotions of the world as they are present on the web at that moment, mapped onto the actual position of major cities on the globe, a mix which can also be read like a music score. The musical interpretation deciphers the cities and their emotional polarity to produce a specific musical result by adapting rhythm, coloration and evolution to the selection of the emotional states made by the user. After listening to the personal result through the integrated speakers, users can plug in their USB stick or MP3 player to load their emotional sound remix.

Music: Jean-Baptiste Barrière; Software: Birgit Lichtenegger, Artem Baguinsky, Marloes de Valk (V2); Production: CITU, Paris.

Posted by jo at 03:32 PM | Comments (0)

January 24, 2006

Upgrade! New York

ugpo2.jpg

Deep Performance

Upgrade! New York presents Pauline Oliveros performing with the Tintinnabulate Ensemble from Rensselaer Polytechnic Institute and the Meme Improvisation Group (MIG) from Brown University. A discussion will follow. January 26, 2006; 7:30 PM - 9:00 PM, Eyebeam, 540 W. 21st St. This event is open to the public and free with a suggested donation.

Pauline Oliveros (b. 1932) is one of America's most vital composers. Deep Listening, her lifetime practice, is fundamental to her composing, performing and teaching. Tintinnabulate is an ensemble of improvisational players whose voices and senses are tuned through the practice of Deep Listening. The ensemble achieves harmony through intuition, play, and sympathetic resonance. Founded by Pauline Oliveros, and based at RPI's iEAR (Integrated Electronic Arts) program in Troy, NY, Tintinnabulate has initiated a series of Distance Performances: live, co-located, Internet performances with improvisational ensembles across the United States.

MIG is an ensemble focused on technologically mediated improvisation that draws from the Free Jazz and Experimental Music Tradition. Members include both Brown University and Rhode Island School of Design students. MIG is an outgrowth of MEME, the multimedia and electronic music composition program at Brown University.

More About The Performers: During the 1960s John Rockwell named Pauline Oliveros' work as one of the most significant of that decade. In the 70s she represented the U.S. at the World's Fair in Osaka, Japan; during the 80s she was honored with a retrospective at the John F. Kennedy Center for the Performing Arts in Washington D.C.; the 1990s began with a letter of distinction from the American Music Center presented at Lincoln Center in New York; in 2000 the 50th anniversary of her work was celebrated with the commissioning and performance of her Lunar Opera: Deep Listening For_tunes. She serves as Distinguished Research Professor of Music at Rensselaer Polytechnic Institute, Troy, NY; Darius Milhaud Artist-in-Residence at Mills College, Oakland, CA; and president of the Deep Listening Institute in Kingston, NY.

Tintinnabulate Ensemble:
Alex Chechile
Bart Woodstrup
Jesse Pearlman Karlsberg
Yael Kanarek

Meme Improvisation Group (MIG):
Kevin Patton
Damon Baker
Joseph Butch Rovan
Carmen Montoya

Tintinnabulate and MIG first performed together in the Fall of 2005. The ensembles performed over the Internet, each from their respective location. This performance at Eyebeam, hosted by The Upgrade, brings the two ensembles together into the same physical space for the first time.

theupgrade_index.gif

The Upgrade! is a monthly meeting for professionals in the field, providing an open forum for the artists, designers, critics, curators, and educators who form New York's digital arts community. New media artist Yael Kanarek initiated The Upgrade in April 1999 and currently coordinates the program in conjunction with Eyebeam, which sponsors its activities. The monthly gatherings include artist presentations, lectures and group discussion.

Posted by jo at 07:31 AM | Comments (0)

January 23, 2006

Fyra ofoner

ophones_body_bg.jpg

Invitation and Unwritten Score

Fyra Ofoner (Four Ophones)--by Erik Sandelin and Magnus Torstensson--is an invitation and an unwritten score for a composition that lasts as long as the exhibition does. You and other visitors take part as audience, musicians, and co-composers. No one knows what will happen or how it will sound. Be prepared for everything from subtle interference etudes, to collaborative chanting and big, bad rhythm orgies.

An ofon is a loudspeaker with an attached telephone headset. When a visitor has recorded a sound into an ofon, it is repeated infinitely until another sound is recorded into the same ofon. When all four ofons are used, no one can predict what will happen or how it will sound. Videos on the website. [blogged by Regine on we-make-money-not-art]
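In software terms each ofon is a one-slot looper: whatever was last recorded repeats until it is overwritten. A rough sketch of that behaviour using the python-sounddevice library (the actual piece is a hardware installation, so this is illustrative only, with an assumed fixed three-second buffer):

```python
import sounddevice as sd

SAMPLE_RATE = 44100
SECONDS = 3

def ofon():
    """One ofon: the last recorded phrase repeats until it is overwritten."""
    while True:
        input("press Enter and speak into the handset...")
        sd.stop()                                    # interrupt the previous loop
        phrase = sd.rec(SECONDS * SAMPLE_RATE,
                        samplerate=SAMPLE_RATE, channels=1)
        sd.wait()                                    # block until recording ends
        sd.play(phrase, SAMPLE_RATE, loop=True)      # repeat until replaced

if __name__ == "__main__":
    ofon()
```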

Posted by jo at 02:58 PM | Comments (0)

Thomson & Craighead

Decorative newsfeed 1.jpg

Audio and Visual Machines

January 26 – March 3, 2006 at Mejan Labs: Akademigränd 3, SE-111 52 Stockholm, Tel: +46(0)8-796 60 30; info[at]mejanlabs.se Open: Tue-Fri 11-17, SAT-SUN 12-16, closed Mondays.

At Mejan Labs Thomson & Craighead present two installations: Decorative Newsfeeds and Unprepared Piano. Decorative Newsfeeds could be described as a digital automatic drawing, a sort of contemporary update of Jean Tinguely's drawing machines perhaps, but in this case with readable, authentic, up-to-the-minute news headlines gathered in real time from the web. The headlines are updated continuously and projected in evolving trajectories that weave and intersect each other according to a simple set of rules.
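The real-time half of the piece, pulling live headlines from the open web, can be approximated with any RSS feed. A minimal sketch using the feedparser library; the feed URL is an arbitrary example and the artists' actual sources and drawing rules are their own:

```python
import time
import feedparser

FEED_URL = "http://feeds.bbci.co.uk/news/rss.xml"   # any RSS feed will do

def latest_headlines(n=10):
    """Fetch the n most recent headlines to hand to the drawing routine."""
    feed = feedparser.parse(FEED_URL)
    return [entry.title for entry in feed.entries[:n]]

while True:
    for headline in latest_headlines():
        print(headline)        # the installation instead animates these
    time.sleep(300)            # refresh every five minutes
```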

send_binary.jpg

In Unprepared Piano, a Yamaha Disklavier grand piano is connected to a database of music MIDI files appropriated and compiled from all over the web. This library of found data is then “performed” automatically in the gallery with the full authority one associates with a concert grand piano. Each MIDI file contains an electronic score for a whole piece of music with different tracks for different instruments. When the piano plays one of these scores, it switches between the tracks randomly, which means it will sometimes play a piano part correctly but may also render drum parts, string sections and marimbas etc. in awkward configurations and combinations. On a monitor it is possible to see real-time information about what is being played.
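The random track-switching described above can be sketched with the mido library. This is a guess at the mechanism rather than the artists' code: every track of a MIDI file is merged onto one timeline, but only the currently "active" track is allowed to sound, and the active track changes every few seconds. The file name and MIDI output port are placeholders, and tempo changes are ignored.

```python
import random
import time
import mido

mid = mido.MidiFile("found_score.mid")        # placeholder file from the library
out = mido.open_output()                      # e.g. the Disklavier's MIDI port

# Flatten every track onto one absolute-tick timeline, remembering the source.
events = []
for index, track in enumerate(mid.tracks):
    tick = 0
    for msg in track:
        tick += msg.time
        events.append((tick, index, msg))
events.sort(key=lambda e: e[0])

TEMPO = 500000            # microseconds per beat; tempo changes are ignored here
active = 0
switch_at = time.time() + 5
last_tick = 0

for tick, index, msg in events:
    time.sleep(mido.tick2second(tick - last_tick, mid.ticks_per_beat, TEMPO))
    last_tick = tick
    if time.time() > switch_at:               # hop to a random track
        active = random.randrange(len(mid.tracks))
        switch_at = time.time() + 5
    if index == active and msg.type in ("note_on", "note_off"):
        out.send(msg)   # only the active track sounds; a real version would
                        # also flush hanging note-offs from abandoned tracks
```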

The title Unprepared Piano specifically references the composer John Cage and his method of preparing a piano by fastening nails, coins and so on directly onto the strings inside the instrument, thereby changing its sound when played. But the title also refers to the piano being unprepared for the information it accesses: with no human performer and no way of knowing exactly what might be performed, Unprepared Piano becomes an endless generative mechanism for the performance of unique musical improvisations.

Thomson & Craighead: British artists Jon Thomson and Alison Craighead have been working together since the early 1990's. They are considered to be among the leading artists on the British new media art scene. Much of their work to date has paralleled the development of Internet technology by way of more traditional media such as video, sound and gallery installation, and they began working with electronic networks and communications systems when the world wide web first appeared in the mainstream around 1995. As this technology has improved and internet connections have become faster, they have begun to re-combine internet technology with their gallery work exploring the consequential possibilities of linking the internet to these kinds of physical spaces.

Thomson & Craighead often use the web as a gigantic database, procuring and reconfiguring existing material in real-time to offer new meanings and perspectives on the way in which we all might perceive the world around us. This kind of approach is in keeping with many methods used across the whole canon of contemporary art: real-time and generative processes, randomness, the recycling and misuse of information and technology, and a process-based flow that moves from one stage to another.

Posted by jo at 10:13 AM | Comments (0)

January 05, 2006

Feedtank Experiments

transpose.jpg

TransPose + VideoTank

TransPose (left) is an audiovisual performance instrument whose interface encourages dramatic movement. It was conceived to address the absence of physical human expression found in most electronic music based performances.

In TransPose, the performer's silhouette is projected in front of them along with noteboxes, which produce a sound when touched. The performer can change between instruments and record loops that can be turned on or off to create layered compositions.

Computer vision technology captures the performer's physical actions and translates them to audio in real time. The performer sits in front of a camera, and their silhouette is projected in front of them in relation to a number of predefined trigger areas called "noteboxes." By using the silhouette to overlap the noteboxes, the performer triggers various tones.

c-blur-pic1.jpg

In rhythm mode, six different drum sounds are mapped to noteboxes which the performer can strike to play a beat. In melody mode, two instruments are mapped to two elongated noteboxes on either side of the performer. The hand's vertical positioning inside these noteboxes determines the tone's pitch; its horizontal positioning determines the tone's volume. [blogged by Regine on we-make-money-not-art]
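The notebox logic amounts to asking whether the silhouette mask overlaps a predefined rectangle. A rough OpenCV sketch of that step, assuming a fixed camera and background subtraction; Feedtank's own implementation is not published here, and the notebox coordinates and sound names are invented.

```python
import cv2

# Each notebox is a rectangle in camera coordinates plus the sound it triggers.
NOTEBOXES = [((50, 50, 80, 80), "kick"), ((200, 50, 80, 80), "snare")]

cap = cv2.VideoCapture(0)
subtractor = cv2.createBackgroundSubtractorMOG2()

while True:
    ok, frame = cap.read()
    if not ok:
        break
    silhouette = subtractor.apply(frame)          # white where the body moves
    for (x, y, w, h), sound in NOTEBOXES:
        region = silhouette[y:y + h, x:x + w]
        coverage = cv2.countNonZero(region) / float(w * h)
        if coverage > 0.25:                       # enough overlap -> trigger
            print("trigger", sound)               # replace with audio playback
    if cv2.waitKey(1) == 27:                      # Esc to quit
        break

cap.release()
```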

VideoTank (above) places the participant in a reactive environment that emulates the sound and movement of liquid. The more movement that occurs, the more the VideoTank environment is agitated.

Posted by jo at 03:59 PM | Comments (0)

January 03, 2006

Death and the Powers

powers.gif

Hyperinstruments, Sonitronics, and Musical Propulsion

"...In the fall of next year the incredible-sounding opera Death and the Powers will go on, with music by Tod Machover (of the MIT Media Lab; The Brain Opera, Resurrection), libretto by Robert Pinsky (Mindwheel, The Figured Wheel, Jersey Rain, The Favorite Poem Project), robotics engineering by Cynthia Breazael (MIT Media Lab), and production design by Alex McDowell (Minority Report, Fight Club, The Crow)...(blogged by Nick on Grand Text Auto)

"...The music of Death and the Powers will represent a bold step forward towards a new kind of opera. Innovative vocal techniques will be designed especially for this work, especially for Simon who will virtuosically control extensions and manipulations of his own voice the way DJ’s spin turntables. Miranda (soprano), Evvy (mezzo soprano), and Nicholas (tenor) will sing in traditional fashion on stage, sometimes amplified, sometimes not. Simon (bass-baritone, with an unusually extended range and exceptional acting ability) will always be on stage, and will communicate directly as well as through The System. His "interludes" allow the audience to glimpse Simon’s changing inner state, and provide dramatic musical propulsion and musical continuity for the whole opera.

The vocal music will range from the lyrical, expressive singing of members of the Powers family, to the robotic bird that vocalizes during key moments of the opera, to the “voice” of Simon Powers – careening between speech and song, memory and immediacy, breath and bravura – that is at once expiring and transcendent.

The instrumentation for the opera calls for a small ensemble (ca.10 players: 5 strings, 3 winds, 1 percussion, 1 keyboard) located in the pit. Players will perform on specially designed, next-generation Hyperinstruments which will represent significant advances over our current ones, in gestural sophistication, beauty of sound, and simplicity of use. Unlike current electronic instruments, these new Hyperinstruments will allow each performer to control his/her precise sound mix and balance, with overall balance of instruments and voices being modulated by the conductor. Another innovative musical feature of the opera will be the first-ever use of sonic animatronics ("sonitronics"), or physical, sculptural elements in the pit and on stage which will be “performed” directly from the instrumental Hyperinstruments – the result will be a symphonic, kinetic jungle of percussive, wind, and vocal-like sonorities." From the Overview.

Posted by jo at 06:57 PM | Comments (0)

Controlling Music with DIY Interactive Gloves

laetitia2.jpg

"Interactive artists and musicians have long experimented with sensor-packed gloves for controlling music, sound, and video. There's Laetitia Sonami, who controls Max/MSP with her Lady's Glove, and many other projects like the Hypersense Complex flex sensor glove-cum-gestural software as seen here this summer. Laetitia's glove is elegantly sculptural, as seen below, and with years of practice performing with it, she's built a whole performance practice around the glove as an instrument..." Continue reading Controlling Music with DIY Interactive Gloves by Peter Kirn, Create Digital Music.

Posted by jo at 02:17 PM | Comments (0)

December 14, 2005

BOULDER - SCOTTSDALE - SYDNEY - WWW

24count.jpg

24 HOUR COUNT: "a blog jam"

The 24 Hour Count is a multi-media blog band made up of Colorado artists Mark Amerika, Rick Silva, and Nathaniel Wojtalik. For this newly commissioned 24-hour online blog performance, the artists will use a variety of media including the Internet, mobile phones, digital video and photo cameras, mini-disk recorders, musical instruments, and many computer software programs to improvisationally remix, interpret, and respond to current events while filtering their "digital readings" through the prism of Count Lautréamont's "Songs of Maldoror," a classic 19th-century French text whose author the Surrealists adopted as the progenitor of their significant 20th-century movement.

Sponsored by the Scottsdale Museum of Contemporary Art, the Alt-X Online Network, and the TECHNE lab at the University of Colorado at Boulder, the artists performing this distributed multi-media blog performance will be located in three different locations: Sydney, Australia, Boulder, Colorado, and Scottsdale, Arizona.

The event starts December 17th at 14:00 MST/Arizona; 16:00 EST; 21:00 London; 22:00 Berlin; December 18th, 08:00 Sydney.

For 24 hours straight, the three distributed artists will use the same blogging website as the virtual location for their ongoing multi-media jam session. The performance will coincide with the opening of the SOUTHWEST.net:Techno show in Scottsdale, and a living archive of the blog jam site will remain online both in the gallery through May 14, 2006, and on the web throughout the duration of the exhibition and beyond.

Posted by jo at 10:08 AM | Comments (0)

December 05, 2005

Baby Love

6cups.jpg

Drive Me Drive Me Crazy

Baby Love situates a human and its baby clones in the perpetual spin of a fairground teacup ride. Tea and sympathy, love and ME-motion. Love songs, uploaded and transmitted by the public via an 802.11 wireless network, are coded as ME-data in the cloned locker babies. In revolt against the mirrored self, the clone babies reprocess the networked ME-data during the teacup joyride. By taking a teacup ride with the babies, the ME-data are retrieved, played back, shuffled, and jumbled. As the gentle ride turns into a fast spin, the data jams and is jammed, and we are left to sort out the ME with the babies in the storming teacups. A crash will eventually happen: when the teacups bump into each other, the clone babies exchange ME-data and broadcast the remix on the web. Shu Lea Cheang's installation is at the Palais de Tokyo in Paris from Dec. 8 through January 8, 2006.

Project Background: The Locker Baby project, first conceived in 2001, consists of three installation plans: baby play, baby love, and baby work. The first installment, baby play, was commissioned and exhibited at NTT [ICC] (InterCommunication Center) in Tokyo in 2001. The Locker Baby project recalls Ryu Murakami's noted novel Coin Locker Babies (1980), in which two boys abandoned at birth in one-square-foot metal boxes grow up haunted by the sound of human heartbeats, those of their birth mothers. The updated version of the locker baby proposes a fictional scenario set in the year 2030. The DPT (DollyPolly Transgency) clones locker babies with genes extracted from deep-sea pearls harvested off Okinawa Island. Coin lockers situated in busy Tokyo train stations are the breeding grounds for underworld test-tube locker babies. The biobot locker babies are the clone generation of our scifi fantasia reality, entrusted to receive, store, transmit and negotiate human memory and emotion. The locker baby holds the key to unlock a networked inter-sphere of ME (Memory-Emotion) data. [via Regine on we-make-money-not-art]

Posted by jo at 01:16 PM | Comments (0)

December 02, 2005

Virtual Air Guitar

dn8383-2_250.jpg

Air guitarists’ rock dreams come true

"Aspiring rock gods can at last create their own guitar solos - without ever having to pick up a real instrument, thanks to a group of Finnish computer science students. The Virtual Air Guitar project, developed at the Helsinki University of Technology, adds genuine electric guitar sounds to the passionately played air guitar.

Using a computer to monitor the hand movements of a "player", the system adds riffs and licks to match frantic mid-air finger work. By responding instantly to a wide variety of gestures it promises to turn even the least musically gifted air guitarist into a virtual fretboard virtuoso.

Aki Kanerva, Juha Laitinen and Teemu Mäki-Patola came up with the idea after being invited to develop a virtual instrument as part of their coursework. "The first thing that came to mind was an air guitar," Kanerva told New Scientist. The resulting system consists of a video camera and a computer hooked up to an appropriately loud set of speakers. A player then needs only to don a pair of brightly coloured gloves in order to rock out. Computer vision software automatically keeps track of their hands and detects different gestures, as a video of the system in action demonstrates (22MB, requires Windows Media Player and DivX codec for the visual aspect of the footage)..." Continue reading Air guitarists’ rock dreams come true by Will Knight, NewScientist.com news service, November 2005. [via Eyeteeth]
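The glove-tracking step described in the article is a classic colour-segmentation task. Below is a simplified OpenCV sketch of that stage alone; the Helsinki system also performs gesture recognition and guitar synthesis, which are omitted here, and the HSV range is an assumed value to be tuned for the actual gloves.

```python
import cv2
import numpy as np

# Assumed HSV range for a bright orange glove; tune for the real glove colour.
LOWER = np.array([5, 120, 120])
UPPER = np.array([20, 255, 255])

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)              # glove pixels only
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        blob = max(contours, key=cv2.contourArea)      # biggest coloured blob
        m = cv2.moments(blob)
        if m["m00"] > 0:
            cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
            print("hand at", cx, cy)   # distance between hands -> fret position
    if cv2.waitKey(1) == 27:
        break
cap.release()
```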

Posted by jo at 09:20 AM | Comments (0)

October 27, 2005

WebJam 09

atone_03.jpg

decentred | distributed improvisation

WebJam 09: Saturday 29th October, 8-11pm, Norwich Gallery, UK, FREE ENTRY. Live Webcast Online from 7.30 p.m. (performance starts at 8:00 p.m.). Connect at http://www.n0media.net. Live Sound/ Visual Performances featuring Phil Archer, Andre Bosman, Laura Cannell, Shigeto Wada, Tonesucker, Hermits' Squeal, BabyGrand, Hoofus, Dan Tombs, Tom Simmons, and Liam Wells.

Webcasts require QuickTime Player (version 6+): Download QuickTime Player for free. Directions/Gallery info at http://norwichgallery.co.uk

Posted by jo at 11:17 AM | Comments (0)

October 19, 2005

SimpleTEXT

simpletextlogo2.gif

a cell phone enabled interactive performance by Family Filter

When: Wednesday, October 26, 2005 (8 pm); Where: New York University (NYU), Kimmel Center for University Life, 8th floor, 60 Washington Square south, Corner of 4th St and LaGuardia Place (on Washington Square Park). [Map] FREE TO ALL. Bring your Cell phone and Wireless Laptop! More info about the Handheld Event.

SimpleTEXT is a collaborative audio/visual public performance that relies on audience participation through input from mobile devices such as phones, PDAs or laptops. SimpleTEXT focuses on dynamic input from participants as essential to the overall output. The performance creates a dialogue between participants who submit messages which control the audiovisual output of the installation. These messages are first parsed according to a code that dictates how the music is created, and then rhythmically drive a speech synthesizer and a picture synthesizer in order to create a compelling, collaborative audiovisual performance.

SimpleTEXT focuses on mobile devices and the web as a bridge between networked interfaces and public space. As mobile devices become more prolific, they also become separated by an increased emphasis on individual use. The SimpleTEXT project looks beyond the screen and the isolated usage of mobile devices to encourage collaborative use of input devices that drives the visual and audio output, informs each participant of the others' interaction, and allows people to actively participate in the performance while it happens. Our purpose with the performance is to create the possibility of large-scale interaction through anonymous collaboration, with immediate audio and visual feedback. SimpleTEXT encourages users to respond to one another's ideas and build upon the unexpected chains of ideas that may develop from their input.
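One way to picture the parsing stage is as a simple text-to-music mapping. The sketch below is a toy illustration, not Family Filter's actual parsing code: letters become chromatic pitches and word boundaries become rests, a stream that could then be handed to the speech and picture synthesizers.

```python
def message_to_events(text, base_pitch=60):
    """Turn an SMS message into (midi_pitch, duration_in_beats) pairs."""
    events = []
    for word in text.lower().split():
        for ch in word:
            if ch.isalpha():
                pitch = base_pitch + (ord(ch) - ord("a")) % 12   # letter -> note
                events.append((pitch, 0.25))
        # Rest between words; longer words earn a slightly longer pause.
        events.append((None, 0.25 * max(1, len(word) // 3)))
    return events

print(message_to_events("hello from the audience"))
```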

Support/Sponsors:

SimpleTEXT is created by Family Filter, a collaboration between Jonah Brucker-Cohen, Tim Redfern, and Duncan Murphy. It was originally funded by a commission from Low-Fi, a new media arts organization based in London, UK. This event is sponsored by NYU's Program Board and the "Handheld" show.

URLS:
SimpleTEXT
Low-Fi
Jonah Brucker-Cohen
Tim Redfern

Posted by jo at 03:00 PM | Comments (0)

October 18, 2005

Three Crosses of Queensbridge

bikegrid.jpg

Acoustic Conversions

London-based artist Kaffe Matthews' Three Crosses of Queensbridge is a symphony for radios and bicycles. Notes written on a musical staff are replaced by drawings on a neighbourhood map, which provide various itineraries for visitors to follow on bikes while the artist broadcasts an electronic composition over a live radio signal. Cycling around the streets with radios receiving the broadcast, participants become performers and passers-by become accidental audience members.

The work is part of the Sounds like Drawing exhibition, at the Drawing Room, East London, 13 October - 20 November 2005. [via Regine on we-make-money-not-art]

Posted by jo at 06:18 PM | Comments (0)

DIAL-A-DIVA

dad-tit-orng.gif

Call For SINGERS and CONCERT GOERS

:: scroll down for info in French / German / Spanish / Portuguese::

Take Part in DIAL-A-DIVA, an event spanning the globe for 24 hours, connecting singers and listeners live by telephone. The project is inspired by a period in history when the telephone was the first broadcasting medium for entertainment. From the 1880s to the 1920s, phone companies in many countries including France, Britain, Hungary and America set up subscription services connecting opera houses and theatres to a remote audience listening via the telephone network. Dial-a-Diva serves a similar function, but reflects today's global telecommunications and social networks.

The event will start at 08:00 GMT on December 3rd in time for evening concerts in New Zealand and Australia. Then as time moves on, concerts will begin throughout Asia, the Middle East, Africa, Europe and ending with the Americas, 24 hours later. An installation at CCA Gallery, Glasgow, Scotland invites visitors to participate and view the hub of operations throughout the project.

To be part of Dial-a-Diva all you need to do is sing into a telephone, or use your phone as a microphone wherever singing happens. The Dial-a-Diva call centre will link as many different types of singing, from as many locations as possible, to a wide international audience of telephone listeners. Whether at home, in a concert hall or a bar, whether professionally or just for fun, whether accompanied or solo, whatever the singing, Dial-a-Diva would love to have you taking part. Sign up and we will call you back at the time of your concert, wherever you are; worldwide participation is free.

SIGN UP
online: www.dialadiva.net
email: dialadiva[at]gmail.com
phone: +44 (0) 141 227 1830 (UK landline)

This project is supported by a Creative Scotland Award and technically realised by Coveline, Intercall Europe, Commercial Decisions, Rook Technologies Inc. & CCA Gallery, Glasgow, Scotland.

----------------(Castellano)---------------

DIAL-A-DIVA :: Participar :: ¿Eres cantante o asistente a conciertos?

Dial-a-Diva es un evento que se extiende por todo el mundo durante 24 horas, conectando cantantes y oyentes por teléfono en directo. El proyecto de Zoe Irvine está inspirado en el periodo de la historia en el que el teléfono era el primer medio de radiodifusión para espectáculos. Desde la década de 1880 hasta la de 1920 las compañías telefónicas de muchos países, incluidos Francia, Gran Bretaña, Hungría y América, ofrecían servicios de abonado conectando teatros de ópera con una audiencia distante, que escuchaba el concierto por conexión telefónica. Dial-a-diva hace esta misma función, pero refleja las redes sociales y de telecomunicaciones globales de nuestros días.

El evento empezará a las 08:00 GMT del 3 de diciembre, a tiempo para los conciertos de la tarde en Nueva Zelanda y Australia. Después empezarán los conciertos en Asia, Oriente Medio, África y Europa, para acabar en América 24 horas más tarde.

Una instalación en Galeria CCA, Glasgow, Escocia, invita al espectador a participar y ver el centro de operaciones del proyecto.

Participar - Para ser parte de Dial-a-Diva lo único que tienes que hacer es cantar por teléfono o usar el teléfono como micrófono allá donde vayas a cantar. La línea de atención telefónica de Dial-a-Diva conectará todo tipo de cantantes, desde todas las localizaciones posibles, a un amplio público internacional de oyentes telefónicos. Ya sea en casa, en una sala de conciertos o en un bar, ya seas profesional o simplemente por divertirte, estés acompañado o sólo, ya sea una canción u otra, Dial-a-Diva está deseando que participes. Inscríbete, te llamaremos a la hora del concierto donde estés – está abierto a todo el mundo.

Inscripciones:
online: www.dialadiva.net
email: dialadiva[at]gmail.com
teléfono: +44 (0) 141 227 1830

Este proyecto esta subvencionado por Creative Scotland Award y cuenta con realización técnica de Coveline, Intercall Europe, Commercial Decisions, Rook Technologies Inc. & CCA Gallery, Glasgow, Escocia.

----------------(Deutsch)---------------

DIAL-A-DIVA :: TEILNEHMEN :: Sind Sie Sänger/in oder
Konzertbesucher/in?

Dial-a-Diva ist ein weltumspannendes Event, dass Sänger/innen und Zuhörer/innen für 24 Stunden live über Telefon verbindet. Das Projekt ist inspiriert von der historischen Periode in der das Telefon als erstes Übertragungsmedium für Unterhaltung verwendet wurde. Telefongesellschaften aus vielen Ländern, unter anderem aus Frankreich, Grossbritannien, Ungarn und Amerika haben zwischen 1880 und den 20iger Jahren des 19.Jahrhunderts ein Abonnementservice eingeführt, dass einer abgeschiedenen Zuhörerschaft ermöglichte, Aufführungen aus verschiedenen Opernhäuser und Theater live via Telefon zuzuhören. Dial-a-Diva hat eine ähnliche Funktion und reflektiert dabei die heutigen globalen telekommunikations und sozialen Netzwerke.

Das Event beginnt am 3.Dezember 2005 um 08:00h GMT – zeitgerecht für Abendkonzerte in Neuseeland und Australien. Den Zeitzonen veresetzt entsprechend, beginnen die Konzerte in Asien, im mittleren Osten, Afrika, Europa und enden 24 Stunden späeter in den Amerikas. Eine Installation in der CCA Gallerie Glasgow, Schottland, ladet Besucher zum Mitmachen und zum Beobachten der gesamten Operation ein.

TEILNEHMEN
Sind Sie Sänger/in oder Konzertbesucher/in?. Um dabei sein zu können benötigen Sie nur ein Telefon in das Sie selbst Singen oder einen Ort wo gesungen wird und Ihr Telefon dabei als Mikrofon benutzen. Dial-a-Diva verbindet möglichst viele unterschiedlichste Gesangsrichtungen von möglichst vielen unterschiedlichen Orten mit einer breiten internationalen Telefonzuhörerschaft. Ob zu Hause, in einer Konzerthalle oder in einer Bar, professionell oder zum Spass, ob eine Stimme oder viele, was auch immer der Gesang ist, Dial-a-Diva würde sich sehr über Ihre Teilnahme freuen.

Melden Sie sich online, via email oder Telefon an und Sie werden von uns zum Zeitpunkt des Konzerts kostenlos zurückgerufen – wo auch immer Sie sich befinden.

Anmeldung durch:
online: www.dialadiva.net
email: dialadiva[at]gmail.com
telefon: +44 (0) 141 227 1830 (UK landline)

Dieses Projekt ist von Creative Scotland Award gefördert und technisch umgesetzt von Coveline, Intercall Europe, Commercial Descisions, Rook Technologies Inc. & der CCA Gallerie Glasgow, Schottland.

----------------(Francais)---------------

DIAL-A-DIVA :: Vous estes un chanteur ? Un spectateur ? :: PARTICIPER

Dial-a-Diva est une action artistique qui traverse le globe pendant 24 heures, soit l'interconnexion de chanteurs et d'auditeurs en direct au téléphone. Ce projet se réfère à une période de l'histoire où le telephone était considéré comme un médium de diffusion de programme de musiques et d'entertainment. Des années 1880 à 1920, les compagnies de téléphones de nombreux pays comme la France, la Hongrie, le Royaume-Uni et les États-Unis, mirent au point un service de souscription qui proposait des écoutes en directs des Opéras et Théâtres à une audience connectée au réseau téléphonique. Dial-a-Diva rempli aujourd'hui une fonction identique, tout en reflétant l'esthétique et la philosophie des réseaux aujourd'hui.

L'action commence à 08:00 GMT le 3 décembre 2005, en synchronisation avec le débuts de concert lyriques en Australie et en Nouvelle Zélande. Puis d'un décalage horaire à l'autre, les concerts surgiront d'Asie, du Moyen Orient, d'Afrique, d'Europe et s'achèveront en Amérique 24 heures plus tard. Une installation au CCA de Glasgow en Écosse, invite les spectateurs-auditeurs à participer et à suivre le théâtre des opérations.

Pour participer à Dial-a-Diva tout ce qu'il vous suffit de faire c'est de chanter dans votre téléphone, ou utiliser votre téléphone portable comme microphone partout ou il y a de la chanson. Le centre d'appel Dial-a-Diva va diffuser toute sorte de chants, de partout dans le monde à un public international d'auditeurs téléphoniques. Si vous êtes chez vous, ou dans un bar ou une salle de concert, si vous êtes amateur ou professionel, de n'importe quel style de chanson, Dial-a-Diva sera heureux de votre participation. Inscrivez -vous, on vous appellera à l'heure de votre concert quelque soit l'endroit ou vous êtes. La participation est gratuite.

Inscrivez -vous
online: www.dialadiva.net
email: dialadiva[at]gmail.com
tel: +44 (0) 141 227 1830 (téléphone fix GB)

Ce projet est soutenu par une bourse de Creative Scotland.
Réalisation technique: Coveline, Intercall Europe, Commercial Decisions, Rook Technologies Inc. & galerie CCA, Glasgow, Ecosse.

----------------(Portugues)---------------

DIAL-A-DIVA (Disque-Diva) :: PARTICIPE :: Você é cantor/a ou freqüentador/a de concertos?

DIAL-A-DIVA é um evento abarcando o planeta durante 24 horas, conectando cantores/as e ouvintes ao vivo por telefone. A inspiração do projeto da artista escocesa Zoe Irvine foi um período na história quando o telefone foi o primeiro meio de transmissão usado para entretenimento. Da década de 1880 até a década de 1920 empresas telefônicas em muitos paises, entre os quais a França, o Reino Unido, a Hungria e os EUA., estabeleceram services por assinatura, conectando assim casas de ópera e teatros a um público remoto que ouvia pela rede telefônica. Disque-Diva preenche uma função semelhante, porém reflete as redes globais atuais, tanto sociais como as de telecomunicações.

O evento começará às 8 horas GMT no dia 3 de dezembro em tempo para pegar os concertos noturnos na Nova Zelândia e Austrália. Conforme a hora vai avançando, outros concertos começarão na Ásia, no Oriente Médio e na África, terminando 24 horas mais tarde nas Américas. Uma instalação no CCA (Centro para Artes Contemporâneas) em Glasgow, na Escócia convida visitantes a participarem e observar o centro de operações durante todo o projeto.

PARTICIPE - Você é cantor/a ou freqüentador/a de concertos ? Para participar no Disque-Diva a única coisa que você precisa fazer é cantar no telefone ou usar o seu telefone como microfone em qualquer lugar onde estiver alguém cantando. O centro de ligações de Disque-Diva vai
Conectar tantos quantos estilos diferentes de cantar e tantas quantas localidades possíveis a um grande público internacional de ouvintes por telefone. Seja em casa, numa sala de concertos ou num bar, seja profissionalmente ou somente por diversão, acompanhado ou solo, seja que for o cantar, Disque-Diva adoraria ter a sua participação. Assine já - nós ligaremos para você na hora do seu concerto onde você estiver – a participação em qualquer lugar do mundo é grátis.

online: www.dialadiva.net
email: dialadiva[at]gmail.com
Telefone: +44(0) 141 227 1830 (telefone fixo – Reino Unido)

Este projeto foi possível graças ao prêmio “Creative Scotland Award”, com realização técnica por Coveline, Intercall Europe, Commercial Decisions & Rook Technologies Inc. & CCA (Centro para Artes Contemporâneas) Glasgow, Escócia.
_______________________________________________
a m b i t : networking media arts in scotland
post: ambit[at]a-r-c.org.uk
archive: www.a-r-c.org.uk/ambit

Posted by jo at 04:53 PM | Comments (0)

October 07, 2005

Make A Baby

skincon.jpg

Skin Conducts Music

Make A Baby, by Lucky Dragons, is an ongoing series of experiments into the possibilities of using skin contact between performers as a means of transmitting and controlling data and creating a positive social environment.

Using a knit and appliqué rug with woven circuits as the touch controller, audience members are invited to participate, building up and breaking down resistive networks by 'passing' signals from skin to skin. Measured changes in these networks are used to play a series of software instruments, allowing spontaneous bands of touchers to crystallize and disperse all over the place, at once, over and over. Lucky Dragons - Make a Baby, October 7, 2005, 8pm, Machine Project, Los Angeles. [blogged by Regine on we-make-money-not-art]
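On the sensing side, the idea can be pictured as watching a set of resistance readings for sudden drops, each drop meaning a new skin-to-skin path has closed. The sketch below is hypothetical: it assumes a microcontroller streams one comma-separated ADC reading per circuit over USB serial, which is only an illustration of the principle, not the rug's documented electronics.

```python
import serial   # pyserial

PORT = "/dev/ttyUSB0"        # placeholder for the rug's microcontroller
THRESHOLD = 200              # ADC drop that counts as "skin contact made"

def watch(port=PORT):
    conn = serial.Serial(port, 9600, timeout=1)
    previous = None
    while True:
        line = conn.readline().decode(errors="ignore").strip()
        if not line:
            continue
        readings = [int(v) for v in line.split(",") if v.isdigit()]
        if previous and len(previous) == len(readings):
            for i, (old, new) in enumerate(zip(previous, readings)):
                if old - new > THRESHOLD:          # resistance fell sharply
                    print("circuit", i, "closed -> play instrument", i)
        previous = readings

if __name__ == "__main__":
    watch()
```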

Posted by jo at 06:04 PM | Comments (0)

September 25, 2005

Instrument of Experience

prescrutinizer.jpg

exploratory listening experience

Music composer/interactive sound artist Ali Momeni and sculptor Robin Mandel collaborate on the perScrutinizer, an interactive installation exploring gesture, expectation and musical interactivity.

Hand gestures are translated into electro-mechanical movements of remote-controlled objects. In response to hand gestures, the objects play their bell jar / cage-structure like a percussion instrument: they collide with, scrape against and bounce off its walls. Real-time software generates additional musical layers that are amplified onto the cage by a coupled loudspeaker driver. While playing the cage with one hand, the user listens to the cage with a stethoscope in the other hand.

The stethoscope is introduced as a listening interface: one plays with one hand and listens with the other. The stethoscope allows the listener to move freely from one installation to another, listening to or joining another user's musings on a different instrument. The layout of the installation, as well as the design of certain cages, encourages simultaneous play by multiple users.

Posted by michelle at 02:30 AM | Comments (0)

September 15, 2005

MOBILE MUSIC TECHNOLOGY

sonic1_thumb.jpg

New Forms

THIRD INTERNATIONAL WORKSHOP ON MOBILE MUSIC TECHNOLOGY, 2-3 MARCH 2006, BRIGHTON, UK.

Combining mobile technology and music promises exciting future developments in a rapidly emerging field. Devices such as mobile phones, walkmans and iPods have already brought music to the ever-changing social and geographic locations of their users and reshaped their experience of the urban landscape. With new properties such as ad hoc networking, Internet connection, and context-awareness, mobile music technology offers countless new artistic, commercial and socio-cultural opportunities for music creation, listening and sharing. How can we push forward the already successful combination of music and mobile technology? What new forms of interaction with music lie ahead, as locative media and music use merge into new forms of everyday experiences?

Following two successful workshops that started to explore and establish the emerging field of mobile music technology, this third edition offers a unique opportunity to participate in the development of mobile music and to gain hands-on experience of the latest cutting-edge technology. The programme will consist of presentations from invited speakers, in-depth discussions about the crucial issues of mobile music technology, hands-on group activities and break-out sessions where participants can get valuable feedback on their work-in-progress projects. The invited speakers include Michael Bull (University of Sussex, UK), often dubbed by the press as 'Professor iPod' for his iPod and car stereo user studies that reveal fascinating trends for mobile music.

The workshop will take place at the University of Sussex in Brighton, UK. Brighton is situated on the British 'Sunshine Coast' and easily accessible: only 30 minutes from London/Gatwick airport and 60 minutes from central London.

Don't miss this chance to help shape the mobile music landscape of the future!

CALL FOR PARTICIPANTS

We invite practitioners, artists, designers and researchers from all areas, including music, technology development, new media, sound-art, music distribution, locative media and industry to register for this international mobile music workshop.

CALL FOR WORK-IN-PROGRESS

Are you working on a mobile music project and looking for feedback from like-minded people to help you to move on with your ideas? We invite submissions of work-in-progress projects exploring the topic of mobile music. Projects will be discussed, receive critical review as well as support with ongoing problems and issues. Your work should not be completed yet, but either be on-going or just about to get started. Potential projects could include but are not limited to mobile music systems or enabling technologies, interface design, on-going or planned user studies, ethnographic fieldwork, art pieces and other areas relevant to mobile music.

Submissions should include a presentation of the project, explain its relevance to the field of mobile music and describe issues and problems that could be discussed during the workshop. Please include a short biography with the submission. Accepted project authors will be given time to present and discuss their work and will receive feedback by smaller groups of workshop participants including specialists in the field. Authors are encouraged to bring material and prototypes to the workshop.

CALL FOR MOBILE PLATFORMS AND SYSTEMS

In addition to the presentations, discussions and project feedback sessions, the workshop will also offer hands-on group activities to explore technological platforms. We are looking for mobile platforms, systems, installations, applications or devices that include music features or can be used for musical projects. The workshop participants will get hands-on experience with these platforms, so they should be suitable for groups of at least 8 people. This provides you with the opportunity to introduce your platform to experts and practitioners in the field of mobile music and to gain valuable feedback. We invite you to submit a platform description, explaining how it can be used for mobile music and how larger groups can use it during the workshop.

Details are here.

Posted by jo at 10:33 AM | Comments (0)

September 14, 2005

Artistic Interfaces:

artikel1_lille.jpg

Between Instrument and Artwork

"At three separate venues at Ars Electronica 2005 in Linz, Thomas Petersen found examples of hybrids between artworks and artistic instruments, which explore the relations between sound and vision in physical and virtual space. In this article he offers a few perspectives on what we can learn from these artistic interfaces.

Creative Interactivity Required: Certain types of digital artifacts place themselves firmly in the border zone between instruments for artistic production and artworks. They are 'artistic interfaces' in both senses of the concept. On one hand, they constitute useful interfaces for users to create independent artistic expressions. On the other hand, these tools are artworks in their own right, which offer critical perspectives on the concept of the instrument itself and the dynamics of the creative process.

In the field of digital culture the difference between instrument building and art-making can often be quite subtle. In some cases there are not necessarily any structural differences between an interactive artwork and an interactive space for users to produce aesthetic experiences, because they can share characteristics such as interactivity, interchangeability, transformation, user-controlled graphics and sound, etc. Of course many digital pieces are a far cry from resembling tools, e.g. genres like hypertext fiction, point-and-click game environments, etc. The defining characteristic of the artistic interface is a specific type of interactivity where the common denominator is a large number of reconfigurable elements, the total sum of which creates a manipulable space into which the user can project ideas and expressions. This space depends entirely on direct creative interaction from the user.

Each of the following pieces is a framework of possibilities and limitations. They do not promise more complex or superior results than conventional or analogue instruments, but constitute specific explorations of the production of aesthetic results within certain boundaries. These pieces are critical approaches to the technologies of creative production and can be considered complex artistic statements in themselves.

Drawing Sculptures of Sound: An immensely popular work at the Cyberarts exhibition at Ars Electronica was Amit Pitaru's Sonic Wire Sculptor, which received an Honorary Mention in the Interactive Art category. This piece attracted quite a crowd in a stuffy room at the O.K Centrum für Gegenwartskunst.

Sonic Wire Sculptor deals with the production of sound by means of three dimensional wire drawings. It explores the correlation between vision and sound as the user literally draws the sound with a pen at a drawing station. The drawings are inserted in a rotating 3d-space as spinning wire sculptures. As this virtual space rotates 360 degrees, the sculpture is interpreted by the computer and transformed into sound according to the initial speeds and positions of the drawings. The vertical axis determines the pitch, notes are indicated by horizontal lines and a metronome can be found at the centre of the stage. The user can save and retrieve the sculptures, stop the rotation and reposition the compositions freely to reconfigure the elements within a new tonal space. The installation of the piece in the exhibition space is based on surround sound so the sound in the physical space reflects the structure of the virtual sculpture.
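The mapping described here is easy to emulate in outline: as the sculpture spins, each wire point crosses a fixed scan plane, its height sets the pitch and its angle around the rotation axis sets the onset time within one revolution. The following is a purely illustrative sketch, not Pitaru's code, with arbitrary pitch ranges.

```python
import math

def sculpture_to_notes(points, revolution_seconds=4.0,
                       low_pitch=36, high_pitch=84):
    """points: (x, y, z) with y in [0, 1] = height. Returns (onset_s, midi_pitch)."""
    notes = []
    for x, y, z in points:
        angle = math.atan2(z, x) % (2 * math.pi)             # position on the turn
        onset = revolution_seconds * angle / (2 * math.pi)   # rotation -> time
        pitch = int(low_pitch + y * (high_pitch - low_pitch))  # height -> pitch
        notes.append((round(onset, 3), pitch))
    return sorted(notes)

# A short ascending spiral of "wire":
spiral = [(math.cos(t), t / 10.0, math.sin(t)) for t in
          [i * 0.5 for i in range(10)]]
print(sculpture_to_notes(spiral))
```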

Compared to many other similar projects the piece distinguishes itself by actually being playable as a performance instrument. It can be used as a precise musical instrument by means of the guidelines to achieve specific notes and precise rhythm. A practical implementation into a joint performance setting is definitely conceivable. The piece is a critical approach to the production and representation of sound at the same time. The wire sculptures are an original counterpart to the conventional note system and a dynamic equivalent to graphic notation systems like the ones used by composers like John Cage, Brian Eno etc.

A Playful Physical Sequencer: A similar project is Golan Levin's Scrapple: A physical active score table, which was presented at the Ars Electronica Center across the Nibelungen bridge. As with Pitaru's Sonic Wire Sculptor, Levin's piece Scrapple deals with the transformation of shapes into sounds. In this case the piece is based on the configuration of physical objects on a table, transforming them into a musical score.

Instead of placing samples in a sequencer or notes on paper, Scrapple adds a physical dimension to computer-generated music. The user has to move and adjust objects to indicate rhythm, tones, melody, etc. The visual result quickly ends up as an abstract composition of shapes corresponding to sound. The strength of the piece is not so much that it is a precise musical instrument as that it is a very enjoyable toy inviting playful interaction between several users at a time. It also adds a bodily aspect, forcing the users to move swiftly around the table.

Other projects by Golan Levin touch upon similar aspects. In The Manual Input Sessions, hand gestures, finger movements and other objects are analyzed by custom software generating graphics and sounds. In Yellowtail, virtual worm-like creatures are drawn by the user on a screen and transformed into sound by a scanning mechanism.

Augmented Table Composition: Going back over the bridge to the 'Interface Culture' exhibition at Kunstuniversität Linz (Art University of Linz), the reacTable* was exhibited next to a number of experimental interfaces. This particular piece was developed at the Music Technology Group, Pompeu Fabra University, Barcelona.

The instrument consists of a round table and plastic objects marked with various symbols. Based on these symbols a camera tracks the position and direction of the objects when they are placed on the table. When one of the objects is placed, the particular symbol is recognized and the immediate environment is augmented with graphics indicating the functions of the element. Some of the objects produce sound and other objects modify these sounds. Some events can also affect the general visual appearance of the table. The total sum of events creates a dynamic, tangible interface with virtually unlimited possibilities in the creation and visualization of sound. The visual result is a stunning abstract animation generating buzzing, clicking and humming compositions.
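The core idea, a patch that is rewired by physical proximity, can be caricatured in a few lines. This is an illustrative toy rather than the Music Technology Group's software: detected objects are either sound generators or modifiers, and each generator is routed through whichever modifiers sit within reach on the table.

```python
import math

# Objects as the camera might report them: (id, kind, x, y) in table coords.
objects = [
    (1, "oscillator", 0.30, 0.40),
    (2, "lowpass",    0.35, 0.45),
    (3, "oscillator", 0.80, 0.20),
    (4, "delay",      0.50, 0.50),
]
REACH = 0.15    # how close a modifier must be to capture a generator's signal

def patch(objs, reach=REACH):
    """Return a routing list: each generator plus the modifiers it flows through."""
    generators = [o for o in objs if o[1] == "oscillator"]
    modifiers = [o for o in objs if o[1] != "oscillator"]
    routes = []
    for gid, _, gx, gy in generators:
        chain = [mid for mid, _, mx, my in modifiers
                 if math.hypot(gx - mx, gy - my) < reach]
        routes.append((gid, chain))
    return routes

print(patch(objects))   # e.g. [(1, [2]), (3, [])]
```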

The interface allows the composer to handle physical objects to generate digital sound supported by a full range of dynamic visualizations. As opposed to the widespread use of, e.g., the laptop as a performance instrument, the strength of reacTable* is its tangibility and involvement of the user's body in manipulating digital sound. reacTable* has a social and telematic dimension, as there is the possibility of involving distant participants at a shared table. The piece was used in a collaborative performance between Linz and the ICMC conference in Barcelona, Spain. The two tables shared musical components and the composers could affect each other's components in a shared performance situation.

Basic Research: The range of strategies shows us that the visualization/notation of sound is not an unequivocal project. The correlation between sound and vision is a magnet for pioneering instrument builders, simply because this space can never be definitively charted. Each instrument constitutes an interpretation of the ephemeral nature of sound and a longing to map it. The result is an abundance of complex analyses of aesthetic transformations and some very interesting artistic interfaces.

The sheer amount of different strategies of this kind points to the fact that the artistic instrument is under full-scale development these days. These experimental instruments can be regarded as basic research into alternative ways of creating and visualizing music. Because they take place as an experimental practice between art-making and instrument building they can establish interactive spaces which may not have the potential to survive on a commercial market. Certain aspects of these instruments will most likely find their way into commercial production and some of them will maybe become commonplace performance tools - some will not. Whatever their future destinies may be, they are nonetheless important as singular statements and they demonstrate the importance of experimental spaces to develop and test new approaches to artistic tools." [originally published by Thomas Petersen, artificial.dk]

Related links:

- Online version of Amit Pitaru's Sonic Wire Sculptor: http://www.pitaru.com/sonicWireSculptor/
- Amit Pitaru's website: http://www.pitaru.com
- Golan Levin's website: http://www.flong.com
- Yellowtail by Golan Levin: http://www.flong.com/yellowtail/index.html
- The Manual Input Sessions by Golan Levin: http://tmema.org/mis/
- reacTable*: http://www.iua.upf.es/mtg/reacTable/
- A list of tangible musical interfaces: http://www.iua.upf.es/mtg/reacTable/MusicTables/
- Ars Electronica website: http://www.aec.at

Posted by jo at 11:53 AM | Comments (0)

September 13, 2005

0kn0

okno.gif

Looking Glass

0kn0.org is a new interdisciplinary media center for art and technology in Brussels. 0kn0 supports the research, development, creation and presentation of new forms of innovative cultural production, and is dedicated to exposing broad and diverse audiences to new technologies and media arts through a programme of concerts, interactive installations, performances, workshops and lectures.

OKNO presents BO-KU-SHI-N-GU BA-GU [Electronic Boxing Bag Melody Making Machine], an interactive installation by SWUMMOQ.NET [Heerko van der Kooij/nl and Niels Wolf/de in collaboration with Travis Robertson/ca]. By punching the Bokushingu Bagu, the visitor generates and composes melodies from a selection of various sounds, depending on the power and placement of the punch. The visitor applies great effort, only to be hindered by the outcome of that intensity.

Opening performance by swummoq.net: friday 16 september 8pm-10pm or try it yourself on: Saturday 17 September 2pm-6pm; Sunday 18 September 2pm-6pm; Entrance free.

radio_okno is online now! click: http://qt.okno.be:8000/mix.m3u to open the livestream in your mediaplayer.

Masterclass on Synthetic Speech and Machinic Sound Poetry. Participants can still apply. The masterclass is scheduled from October 17-21 and is part of the x-med-k-2005/series, commissioned by the VAF [Vlaams Audiovisueel Fonds].

OKNO: koolmijnenkaai 30/34 quai aux charbonnages | brussels 1080 | belgium
tram 18 [walvis] | Metro Graaf van Vlaanderen - Comte de Flandres
okno is supported by the Ministry of the Flemish Community and the VGC.

Posted by jo at 11:41 AM | Comments (0)

September 02, 2005

Unusual Encounters

pic20485.jpg

Chance Encounters

Unusual Encounters is a three-part, web-based art project in which Jean Paul Gaultier has had the pleasure of participating and which is a mixture of creative sophistication and the latest technology, unique in the world, viewable at this address. Designed by the film director David Mileikowsky, under the artistic direction of Jean Paul Gaultier, the Unusual Encounters site proposes 3 original modules, offering 3 unique meeting experiences:

On September 9th, an on-line artistic improvisation will be organised for 500 trendsetters around the world and we have the pleasure of inviting you to take part in this exclusive venue. On that day, "Zhang will meet Veng": for the first time ever, two unrivalled virtuosos, Maxim Vengerov and Zhang Yimou, the Russian and the Chinese, the violinist and the film director, two meteors of the Modern Arts, will be brought together online in a live artistic performance, somewhere between China and Germany. Some 9000 km apart, the amazing duo will perform on-line, in unison. The first showing is scheduled for September 9th, at 12:00 exactly (Paris time).

You might also like to try Tango Painting, available as of today! For the first time ever, this multi-user graphic application allows pairs of users to draw multimedia forms together simultaneously, at a distance and in an entertaining way, regardless of their skill level. Their work can then be complemented by creations from artists such as Jean Paul Gaultier and Jean-Baptiste Mondino, then shared with others on the web.

Finally, on September 10th, The Perfume Alchemist will be presented in an on-line world premiere. This visionary manga by Alain Escalle (director) tells the story of a child, a little prince of the streets in a high-rise megalopolis, who decides to create the perfume of his dreams. Thanks to "Manga Maker", an on-line video editing platform, every user can participate in adapting the course of the story and then pass the modified story on to his or her soul mate. An extract of this manga is available here.

Posted by jo at 10:25 AM | Comments (0)

August 30, 2005

TeleSon: Invention#8 by Chris Brown

reactable.jpg

Networked reacTables

TeleSon: Invention#8 is composed for four musicians playing two networked "reacTables", one each in Barcelona and in Linz, Austria (ICMC2005). The reacTable is a new electronic music instrument developed at the Music Technology Group in Barcelona by a research team headed by Sergi Jorda. Brown explains: 'I have interfaced my own synthesis engine, written in SuperCollider3, to the reacTable's visual and gestural interface, thus composing my own synthesis environment with its own rhythmic and tonal character. Like the other works in my "Inventions" series (recorded on Pogus CD "Talking Drum"), this piece involves polyrhythms, computer networks, and improvisation. Like "Invention #5" (1999), it also involves musicians collaborating at great physical distance from each other'. See Sounding the Net: Interview with Chris Brown by Peter Traub.

Chris Brown, composer, pianist, and electronic musician, creates music for acoustic instruments with interactive electronics, for computer networks, and for improvising ensembles. Recent recordings of his music are available on Tzadik, "ROGUE WAVE" (2005) and "LAVA" (1992); and on Pogus, "TALKING DRUM" (2001). He has also performed and recorded extensively with "The Hub", "Room", and the "Glenn Spearman Double Trio". As a pianist he has recorded music by Henry Cowell, Luc Ferrari, and John Coltrane; as an electronic musician, he has collaborated with John Zorn and Wadada Leo Smith. He teaches electronic music at Mills College in Oakland, where he is Co-Director of the Center for Contemporary Music (CCM).

Posted by jo at 11:24 AM | Comments (0)

Activated Spaces + Crawling through Network Cable

crawling.gif

Sounding Jesse Pearlman Karlsberg

"Abstract: Successful political sound art comes in a variety of forms. Interactive sound art can upend entrenched social relations around the consumption of media. Work articulating obscured spacial or social power dynamics can provoke deep thought and discussion. Work incorporating explicit documentary material can dovetail successfully with social movements, enlivening events, and serving as a mechanism for fundraising, awareness raising, or mobilization. These different tracks can be employed, singly, or in combination to make effective political art work for a variety of settings. This thesis follows one sound artists' path through the multiplicity of potentially rewarding approaches to creating artwork with political and social themes." From Activated Spaces by Jesse Pearlman Karlsberg.

Crawling through Network Cable is a framework for group improvisation, exploring the social conditions of live distance collaboration. Musicians and video artists are in two remote locations, connected by networked computers running iChat AV or other videoconferencing software. The performers improvise in response to other performers in their own location and in the remote location, paying special attention to the features of improvising with physically remote collaborators.

As source material, performers in Crawling through Network Cable should start with the physical infrastructure that connects the two locations: copper network cable, projectors, computers, routers, firewalls.

Crawling through Network Cable was composed by Jesse Pearlman Karlsberg. Musical performance at Rensselaer by Nik Kanter (keyboards), Pauline Oliveros (concertina, harmonica, goat hooves), Jesse Pearlman Karlsberg (laptop). Musical performance at Wesleyan by Dave Kadden (oboe), Angela Opell (clarinet), Anne Rhodes (voice). Live video by Kathy High and Caterina De Re at Rensselaer and Will Swofford at Wesleyan.

A score for structured improvisation for video artists and musicians in two locations, connected by networked computers running videoconferencing software. View the performance score (PDF). Audio and video from Crawling through Network Cable will be posted in Fall 2005.

Posted by jo at 09:22 AM | Comments (0)

Hard Drive Orchestra

frued_drives02_150.jpg

Radio Taking Place Between the Ears and Behind the Eyes

Hard Drive Orchestra--by Fundamental Radio (August Black and Markus Seidl)--is a sculpture and instrument that can be programmed to run autonomously as a stand-alone installation, or played live and intuitively in performance, taking control commands via OSC, MIDI, or another input protocol.

The hardware component consists of a mobile computer containing 8-16 IDE and SCSI hard drives of various ages of computing history and therefore of various sonic properties. The mechanical sounds of the drives are recorded live with 8-16 separate contact microphones, cleaned by a noisegate, and amplified over loudspeakers.

The software component consists of two objects written in C for PureData. One object performs very specific read-write operations on the various hard disks in order to force the disk's head to move from the innermost part of a drive's platter to the outermost part and back again. This results in an audible grinding click. The second object reads the data being written on each disk and blits it to the video screen. Essentially, the first object allows one to control the sonic activity of each drive while the second object simply displays the data to screen.
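
As a rough illustration of what the first object does (the artists' own code is a C external for Pure Data and is not reproduced here), the following Python sketch forces the same kind of head travel by alternating raw reads between the two ends of a block device. The device path, block size and timing are assumptions, and it should only ever be pointed at a sacrificial spinning disk:

```python
import os
import time

# A minimal sketch, not the artists' Pd external: alternate reads between the
# two ends of a raw block device so the head has to sweep across the platter,
# producing the audible click. Needs root privileges and a spare drive.
DEVICE = "/dev/sdb"   # hypothetical sacrificial IDE/SCSI disk
BLOCK = 4096
STRIDE = 1024 * BLOCK  # step offsets so reads are not served from the page cache

fd = os.open(DEVICE, os.O_RDONLY)
try:
    size = os.lseek(fd, 0, os.SEEK_END)
    for i in range(200):
        low = (i * STRIDE) % (size // 4)
        high = size - BLOCK - low
        os.lseek(fd, low, os.SEEK_SET)    # outer tracks (low block addresses)
        os.read(fd, BLOCK)
        os.lseek(fd, high, os.SEEK_SET)   # inner tracks (high block addresses)
        os.read(fd, BLOCK)
        time.sleep(0.05)                  # roughly sets the click rate
finally:
    os.close(fd)
```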

With the Hard Drive Orchestra, Fundamental Radio can analyze and present the physical, mechanical, and most importantly, the aesthetic properties of stored information. The HDO premiered with "The Physical Mechanism of Forgetting" at the Freud Museum in St. Petersburg [Bolshoi Prospekt, Petrogradskaya Storona, 18a] as part of the Traumlandshaften festival, October 7-10, 2003. There, successive versions of the ****soft operating system were deleted, slowly, over a week, culminating in a live performance of found and submitted data.

Posted by jo at 09:08 AM | Comments (0)

August 28, 2005

Improbable Orchestra + Burning Man

side_double_shot_P_Mendez.jpg

Tribal Knob Twiddling in the Desert

The Improbable Orchestra (IO) is an interactive audio installation designed to make digital music accessible to anyone. It consists of a central control unit surrounded by four speakers, designed for up to four users, each controlling a sound sample with manual controls (knobs and buttons).

This sound installation was designed by our team to accomplish the following goals: 1. to create a tool that allows any user to experiment with making and designing music; 2. to create a visually appealing structure that creates a musical space; and 3. to create a musical tool that allows interaction between different players while they experiment with music.

laserharpbig.jpg

Burning Man: Laser Harps, Improbable Orchestra Knob Boxes

For some of the most bizarre and unusual music-sound installation art, look no further than Burning Man [Related 1, 2, 3]. Burning, indeed: this desert-based event has in the past featured Eric Singer's Max/MSP-controlled pyrophone, a propane-powered flaming sound organ. (And, incidentally, that installation is making a repeat appearance this year.)

And what better activity when in the middle of the desert than tweaking knobs and producing strange electronic grooves? That's the idea behind the Improbable Orchestra, an interactive table full o' knobs for collaborative soundmaking. Build one yourself: check out the copious design notes. Basic specs: the free Pure Data graphical multimedia development environment runs the sounds on a gutted Pentium PC with a custom power supply, with a custom circuit board connecting the knobs and fiddly bits through a Parallax BASIC Stamp board. (Lots more specs on their site, missing only details of the Pd patch.)
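
For readers wondering what the glue between the knob hardware and Pd might look like, here is a hedged Python sketch (not the builders' actual code): knob readings arrive over a serial line from the microcontroller and are forwarded to Pure Data as FUDI messages for a [netreceive] object. The serial device, baud rate, line format and UDP port are all assumptions:

```python
import socket
import serial  # pyserial

# Sketch of the plumbing only: forward "<knob_index> <value>" lines from a
# microcontroller's serial link to Pure Data as FUDI messages.
PD_HOST, PD_PORT = "127.0.0.1", 9997
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
board = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)  # hypothetical BASIC Stamp link

while True:
    line = board.readline().decode("ascii", errors="ignore").strip()
    if not line:
        continue
    try:
        knob, value = (int(x) for x in line.split())
    except ValueError:
        continue  # ignore malformed lines
    # e.g. "knob 3 87;" -> [netreceive 9997 1] -> [route knob] in the Pd patch
    sock.sendto(f"knob {knob} {value};\n".encode("ascii"), (PD_HOST, PD_PORT))
```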

"But," says you, "I hate knobs. Give me lasers, man." Sure! You obviously want the interactive Aeolian laser harp, which suspends a series of laser beams through which you can walk to trigger sound. It's the creation of former NYU ITP faculty member Jen Lewin, now based in Colorado and doing interactive sculpture full-time. She has other fantastic projects like interactive butterflies. [blogged by Peter Kirn on Related.

Posted by jo at 10:22 AM | Comments (0)

August 13, 2005

The Handydandy

aaahandy.jpg

Rocking the stage with mobile phones

The Handydandy consists of five media artists from Austria (Bauch Bernhard, Gross Luc, Kirisits Nicolaj, Savicic Gordan, Waldner Florian) using their mobile phones as musical instruments. The mobile phones are used only as interfaces and they are connected, via Bluetooth, to a computer network.

"The entire instrument played by the musicians, is thus divided into the mobile phones, the Bluetooth connections and the laptops acting together over WLan. Thereby different Feedback systems on social and digital level, which are used for the compositions, develop. The selection of this configuration makes possible to use not only the movement in space as temporally akusmatic category but also to connect the powerful aesthetics of a Rock performance with the intellectual requirement of the electronic music. The Handydandy is at the same time a RocknRollband and a computer network - music group." Video. [blogged by Regine on we-make-money-not]

Posted by jo at 08:54 AM | Comments (0)

August 01, 2005

Ruccas.org

ruccas.gif

As Good as the Community Makes It

Today, Ruccas.org celebrates the end of its first year of operation, and the start of the next. A year ago, ruccas.org launched with a handful of artists. Today, it features the works of over seventy artists and collaborative projects, nearly all of which are available under a Creative Commons license.

Its growth and success are due entirely to the community, who have taken advantage of how easy it is to add their own page and share their work with one of the largest online audiences for unconventional electronic art. By allowing anyone to edit the site and add their own pages, and by offering free hosting for aural, visual, and software art, Ruccas.org has made this possible. The ruccas.org philosophy is that the best way to create a site of this type is to give the community as much control as possible, and artists from all around the world have stepped up and proven that our philosophy is correct.

In the coming year, I personally hope that the site will continue to grow at the rapid rate it has been. Ruccas.org is a unique hub in the sense that it is only as good as the community makes it. Therefore, I urge all of you out there who are trying to do something "new" to come share your artwork with us, and to encourage others to do the same. I know that I speak for the entire ruccas.org community when I say that we're looking forward to the next year of new additions, new directions for the site, and new community members.

- John Nowak, Ruccas.org

Posted by jo at 08:20 AM | Comments (0)

July 19, 2005

Maywa Denki in Paris

switchedoncappa-s.gif

NAKI, TSUKUBA, and EDELWEISS

Maywa Denki is performing at Quartier d'ete Festival in Paris on July 21,22 and 23. This art unit, which is produced by Nobumichi Tosa, has been inventing a number of hard-to-describe electric devices by transforming the history and culture of small electrics manufacturers in Japan's high-growth period (about 40 years ago) into three product lines of interactive art objects: NAKI, TSUKUBA, and EDELWEISS.

They've "manufactured" various kinds of musical instruments including Pachimoku, which Nobumichi Tosa wears in the photo, and Koi-Beat, which could be interpreted as "love beat" as well as "carp beat."

Pachimoku is operated by a finger-snapping motion, which is sensed and transmitted to the beater bars of the two mokugyo (wooden drums used in Zen rituals) at the top, through rather thick wires that would withstand high voltage (e.g., AC 100 volts, commonly used for home appliances in Japan).

Koi-beat (see this page) is a carp-shaped rhythm machine with 16x4 monotonous square electric switches and it generates groovy rhythms at 100 Volts.

They play music with the various instruments they've invented – You'll see if you get a chance to go to that event in Paris. For the rest of us, hopefully they'll record the event and turn it into another "manufactured product" of theirs (in short, a CD or a DVD). [blogged by manekineko on we-make-money-not]

Posted by jo at 07:23 AM | Comments (0)

July 12, 2005

RGB

rgb2-3.jpg

Audience Participation in Musical Creation

RGB is an interactive musical installation for 2-9 users made by Tomas Dvorak (CZ), Alessandro Capozzo (IT) and Matous Godik (CZ). The main purpose of RGB is audience participation in musical creation during a live performance. Communication between musicians and audience happens through specially designed flashlights. Each flashlight can emit the 3 basic colours of the spectrum: red, green or blue. Other colours can be made by combining the primary colours.

The artistic content lies in the algorithms and schemes of the communication; each algorithm is a unique musical scheme, and every new step of the project is a release of new concepts and algorithms. While the first package, "Rgb 1," focused on the harmonic and tonal aspects of the musical communication, the new version, "Rgb 2," focuses on more advanced sound and timbre aspects. Videos. [via]

Posted by jo at 11:05 AM | Comments (0)

July 08, 2005

Becoming-Still:

Impnotes.gif

Perspectives in Musical Ontology after Deleuze and Guattari

"...In a recording, even more so than in a performance, the intensities of a performance requires more deterritorializing impetus. In a performance, our expectational intensities are more and more transformed by intensities of performance simply by the musical space in which we find ourselves, insofar as there is fullness-to-explosion. In a recording, however, intensities, even expectational ones, are increasingly rooted, as we have suggested, as we come to “learn” the recording. In a performance, we are inevitably deterritorialized, at least initially, at whatever gradation our intensities of expectation at the onset. In a recording, however, we are inevitably handed a reterritorializing card. Recording is a recoding.

With a recording there is a tendency to not only recodify the music, but to recodify ourselves in the process. If we can acknowledge that each time we listen to this particular recorded performance of Varese’s “Ameriques” the music, through whatever intensities and discoveries enthusing us, is not the same, not phenomenologically-in-itself, not the sum of its sounds (in essence, potentially decoding), then the same must be true for ourselves. We bring the difference and singularity of our lives to the operating table each and every time we experience even that same (i.e., recorded) performance of “Ameriques.” This speaks of us more generally that we experience any music at every turn improvisationally. This speaks of music that it is essentially deterritorializing..." From Becoming-Still: Perspectives in Musical Ontology after Deleuze and Guattari by Michael Szekely, The Improvisor: The International Journal on Free Improvisation [via anne]

Posted by jo at 02:34 PM | Comments (0)

July 04, 2005

Music Insects

insects_front-p.jpg

Tool for Visual Music Performance

Music Insects by Toshio Iwai (1992). Permanent collection at the Exploratorium, San Francisco, U.S.A. These "music insects" "react" to color dots on the screen. When they pass over the dots, they trigger musical scales, sounds, and different light patterns. The user selects colors from a palette with a trackball and paints in the insects' path; the insects perform the colors as they pass over them. The insects' direction can be changed with certain colors, and colors can be painted to achieve less random musical "performances." This piece is a tool for visual music performance.

Posted by jo at 07:29 AM | Comments (0)

June 28, 2005

PitchWeb

pitchweb.gif

Jam this Wednesday

PitchWeb jam this Wednesday, June 29, from 6-8 pm EDT. The occasion is a book release party Routledge is giving for William Duckworth's "Virtual Music: How the Web Got Wired for Sound." They'll be playing as DJ Tamara and the Laptops. The plan is to make the party a "virtual" experience: the party goers have been invited to bring their laptops, and Nora will be weaving the online band into Tamara's house mix and a webcast.

Hope you can join them online at http://www.pitchweb.net/. All you have to do is click, sign on, join a group, and play along. As with the 12-hour May-Day PitchWeb jam, Wednesday's session will also become source material for the yearlong Deep Time: Songs for Servers project that we're beginning later this year.

0415966744.gif

About the Book

· Must-reading for all interested in the world of web-based music
· Highlights diverse artists from John Cage to Moby to Scanner
· Includes unique CD sampler highlighting the composers and works discussed in the book

Virtual Music: How the Web Got Wired for Sound is a personal story of how one composer has created new music on the web, a history of interactive music, and a guide for aspiring musicians who want to harness the new creative opportunities offered by web composing.

For Bill Duckworth, the journey began in 1996 when he conceived the idea for an interactive webcast, named "Cathedral," which was developed over a period of 5 years. On its completion, "Cathedral" won numerous awards, including the ASCAP/Deems Taylor Award for composition, and has already inspired further experimentation.

But this is more than the story of one composer or one piece of music. The book traces the development of interactive music through the 20th century from Erik Satie through John Cage, Brian Eno, Moby, and Scanner. The technology itself is described as it has inspired experimentation by artists, including composers who have developed new ways to involve the audience in their music, plus possibilities for the non-musically trained to "play the Web." Challenges facing the web composer-from copyright issues to commercialization-are analyzed with new solutions suggested.

Virtual Music is a fascinating story that will appeal to fans of new music, creators, performers, and anyone interested in how technology is transforming the arts.

Posted by jo at 08:45 AM | Comments (0)

June 21, 2005

Sale Away

saleaway_11[1].jpg

Mechanical Orchestra with you as Conductor

In Sale Away, passers-by can conduct an "orchestra" of household devices in a display window via their mobile phones. The mechanical orchestra consists of flute-, organ- and brass-playing vacuum cleaners, rattling kitchen mixers, buzzing ventilators, radio-playing toy trains, wobbling jigsaws, dancing tumble dryers, humming refrigerators and other misused household utilities.

The conductor is a big refrigerator, which also serves as the explanatory interface. To start the orchestra and wake up the shop windows, you dial the number and follow the commands displayed on the window. The call opens the door of the fridge, revealing its explanatory interface, the "mobile phone robot person." The robot explains the piece and invites you to act. By pressing keys on the phone you can make the different instruments play along with the melody, bringing in single voices or the whole orchestra tutti.

Video. By Geert-Jan Hobijn (NL), Carsten Stabenow (GER) and Olaf Matthes (GER) from Staalplaat. Via the excellent Culture Base Open List. [blogged by Regine on near near future]

Posted by jo at 08:59 AM | Comments (0)

June 08, 2005

Virtual Musical Instruments:

BodySuit3.gif

Technological Aspects and their Interactive Performance Issues

"ABSTRACT: I have been creating various Gestural Interfaces (*1) for use of my compositions in the project of Virtual Musical Instruments (*2). These Virtual Musical Instruments do not merely refer to the physical instruments, but also involves Sound Synthesis (*3) programming and Interactive Video (*4) to a great extent. Using the Virtual Musical Instruments, numerous compositions and performances were experimented. This paper is intended to report my experiences, as well as their development, instead of being merely abstract theory. This also contains the issues of performance and the problem of the notion of interactivity." Read Virtual Musical Instruments: Technological Aspects and their Interactive Performance Issues by Suguru Goto at HZ Journal.

Posted by jo at 08:28 AM | Comments (0)

June 03, 2005

Ange

ribcagge.jpg

Musical Ribcage

Ange, designed by Danielle Wilde, is a body mounted series of "ribs" with electronic sensors that enable a user to play different sounds, including breathy notes, a gong, rushing water, drums and an oboe. The notes respond differently to pressures. The idea is to use the body as an interface and allow the user to metaphorically touch and "play" the body of the wearer. The design is inspired by an 18th century engraving, Ange Anatomic by Jacques Fabien Gautier d’Agoty. [via near near future]

Posted by jo at 09:22 AM | Comments (0)

June 02, 2005

Contemporary Music Review: Call for papers

07494467.jpg

The Body/Technology–Instrument/Technology Paradigm

We invite papers from writers reflecting on sound in relation to the body/technology; from anyone who attempts to define and/or critically examine threshold conditions of instrument and performer; writers who are keen to challenge con- or discontinuities of instrument and performer, be it in the form of laptop improvisation, interactive sound environments, 3d cave installations, sonic architecture, or other performance situations.

In an era in which technology has had an immense impact on the ways in which instrumental music is performed, indeed in which the role of the instrument, if not the notion of the instrument itself, has become radically altered, we want to examine the instrument’s position as well as the role of the performer.

The body/technology – instrument/technology coupling or decoupling is something that can be, and needs to be explored from various angles. Such discussion invariably leads us to think about what we consider an instrument to be in technologically mediated environments, in particular if we keep in mind that the word’s origin suggests notions of “instruction” (from Latin: instruere): who instructs or who/what is instructed by whom?

• Do we see the instrument as extension, retraction or subversion of the body?

• Is the instrument one that brings the body into existence, or one that denies bodily existence?

• What are the implications of technological environments on the instrument-performer relation?

• Does the body become re-figured and re-inscribed by technology?

These are only a few of the questions that may arise; other exploratory paths that elucidate aspects of the body/technology–instrument/technology paradigms are highly encouraged.

For this discussion papers may rejoice in the liminal, the erotic, or the incestuous, they may celebrate touch, if not the self-touch, and may move beyond notions of the instrument as extension of the body in performance.

Papers from disciplines as varied as music, philosophy, anthropology, sociology, biology, informatics or architecture will be of interest.

Abstracts of around 300 words should be sent to the guest editor by the 31st of July 2005.

Full papers will be due at the end of September 2005.

Please note: Papers are accepted only in English.

Submission of a paper to this journal will be taken to imply that it represents original work not previously published, that it is not being considered elsewhere for publication, and that if accepted for publication it will not be published elsewhere in the same form, in any language, without the consent of the editors and publishers.

All queries should be directed to: Franziska Schroeder, Guest Editor, Contemporary Music Review: Email: franziska[at]lautnet.net

Posted by jo at 07:15 AM | Comments (0)

June 01, 2005

Michel Waisvisz at CHI2005

waisvisz_c.jpg

Physical Touch and Electronic Instruments

Michel Waisvisz is known for his highly physical, sensitive, and ecstatic electronic music performance using The Hands, a gestural sensor instrument that he developed at the STEIM foundation in Amsterdam. Waisvisz has since the late sixties developed new ways to achieve physical touch with electronic music instruments; sometimes literally touching the electricity inside the instruments and thereby becoming a thinking component of the machine. He was amongst the first to play with synthesizers on stage, and very early developed and performed using what are now called gestural controllers. He also is the inventor of the CrackleBox and The Web and many other instruments based on touch interaction.

Besides solo performances and composing for music theatre, he has collaborated with a great variety of musicians/composers: Laurie Anderson, Steve Lacy, DJ Spooky, Najib Cheradi, Moniek Toebosch, The Nieuw Ensemble, Willem Breuker, The San Francisco Symphony Orchestra, Maarten Altena, etc. Waisvisz is the co-founder of electrical sound festivals in Holland.

The following are unanswered questions posed to Mr. Waisvisz at the ACM CHI 2005 conference in Portland Oregon. It is our hope that Mr. Waisvisz will have an opportunity to answer a few of them for us. [blogged by Connor on HML.Blog]

Posted by jo at 07:24 AM | Comments (0)

May 30, 2005

Malleable Music

malleable.jpg

Moulding Music in Your Hands

Malleable Music is a project by Sidney Fels at the Human Communication Technologies Laboratory at the University of British Columbia in Canada. "The malleable surface touch interface combines a deformable input surface and video processing to provide a whole-hand interface that exhibits many attributes of conventional touch interfaces, such as multi-point and pressure sensitivity. This interface also offers passive haptic feedback, which can be effective with applications such as sculpting or massage. (…) This interface allows people to control the computer using pinching, twisting, squeezing and other forms of whole-hand manipulation. We want to use this device to create a new musical instrument." [blogged by nicolas on pasta and vinegar]

Posted by jo at 12:54 PM | Comments (0)

May 17, 2005

Distributed Immersive Performance

comm_imersinet1.jpg

Real-time, Multi-Site Performance

The Integrated Media System Center (University of Southern California) is working on the architecture, technology and experimental applications of a real-time, multi-site, distributed, interactive and collaborative environment called Distributed Immersive Performance (DIP). The objective of DIP is to develop the technology for live, interactive musical performances in which the participants - subsets of musicians, the conductor and the audience - are in different physical locations and are interconnected by very high fidelity multichannel audio and video links. DIP is a specific realization of broader immersive technology - the creation of the complete aural and visual ambience that places a person or a group of people in a virtual space where they can experience events occurring at a remote site or communicate naturally regardless of their location.

Posted by jo at 10:03 AM | Comments (0)

May 12, 2005

Audio d-touch

drum2.gif

Building Blocks for Collaboration

Audio d-touch, by Enrico Costanza, is a set of 3 tangible interface applications for music composition and performance: the Augmented Stave, the Tangible Drum Machine and the Physical Sequencer. It uses a consumer-grade web camera and customizable block objects to provide an interactive tangible interface for a variety of time-based musical tasks such as sequencing, drum editing and collaborative composition. Three instruments are presented here, and future applications of the interface are also considered. It was developed with Simon Shelley at the University of York. Informal observations revealed that the mapping strategy employed makes the interfaces very intuitive and simple to use. Read more [PDF]
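
d-touch relies on its own fiducial-marker recognition, so the following is only a loose sketch of the general idea--a webcam watching physical blocks on a tabletop and turning them into a step-sequencer grid--using naive thresholding with OpenCV. The grid size and threshold values are invented:

```python
import cv2

# Illustrative only: map dark blocks on a light tabletop to cells of a
# 16-step x 4-row drum grid from a single webcam frame.
STEPS, ROWS = 16, 4
cap = cv2.VideoCapture(0)

ok, frame = cap.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 80, 255, cv2.THRESH_BINARY_INV)  # dark = block
    h, w = mask.shape
    pattern = [[False] * STEPS for _ in range(ROWS)]
    for r in range(ROWS):
        for s in range(STEPS):
            cell = mask[r * h // ROWS:(r + 1) * h // ROWS,
                        s * w // STEPS:(s + 1) * w // STEPS]
            pattern[r][s] = cell.mean() > 64   # mostly dark -> step is active
    print(pattern)
cap.release()
```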

Posted by jo at 10:53 AM | Comments (0)

May 09, 2005

Pandemonium

1115657415pandemoniumESP.gif

A Percussive Site Work

Tip tap tip tap. Is that the sound of dripping or is it someone in a cell tapping a code on the wall? Now there are many more tapping sounds. Far and near. Loud and soft. Now someone is banging on a pipe, now a cupboard. Now the hall is filled with a cacophony of beats, working their way back and forth, a PANDEMONIUM of percussion.

Using the existing elements in the prison cells, Janet Cardiff and George Bures Miller have made the entire Cellblock Seven into a giant musical instrument, producing a percussive site work. This instrument, controlled by a computer and MIDI system, is made up of one hundred and twenty separate beaters hitting disparate objects such as toilet bowls, light fixtures and bedside tables found within the prison cells. The composition begins subtly, as if two prisoners are trying to communicate, and then moves through an abstract soundscape and lively dance beats until it reaches a riot-like crescendo.
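
The installation's control software is not published; as a hedged sketch of what a computer-and-MIDI control layer for 120 beaters could look like, the Python below treats each beater as one MIDI note on an output port (the port name, note mapping and timing are assumptions) and thickens the texture toward a crescendo:

```python
import time
import random
import mido  # general-purpose MIDI library; not the installation's own software

# Assumed convention: each of the 120 beaters is addressed as one MIDI note
# (0-119) on a hardware output port driving the solenoids.
out = mido.open_output("Beater Interface")   # hypothetical port name

def tap(beater, velocity=100, length=0.03):
    """Fire one beater as a short note-on/note-off pair."""
    out.send(mido.Message("note_on", note=beater, velocity=velocity))
    time.sleep(length)
    out.send(mido.Message("note_off", note=beater, velocity=0))

# Start sparsely, like two prisoners tapping, then thicken toward a crescendo.
for density in range(1, 20):
    for _ in range(density):
        tap(random.randrange(120), velocity=40 + 3 * density)
        time.sleep(max(0.02, 0.5 / density))
```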

Cardiff and Miller were born in Canada, and currently live and work in Berlin. Their signature audio and video walks have been presented in many cities, including New York, London, Rome, Pittsburgh, Berlin, Muenster and San Francisco. In 2001 they won a Premio Prize for their installation, The Paradise Institute at the Venice Biennale.

The massive Eastern State Penitentiary was once the most famous and expensive prison in the world. Its gothic, castle-like towers stood as a grim warning to lawbreakers in the young United States. This was the world’s first true “penitentiary,” a prison intended to inspire profound regret – or penitence—in the hearts of criminals. The influential design featured cellblocks extending like the spokes of a wheel; each inmate lived in solitary confinement in a vaulted sky-lit cell. The prison itself had running water and central heat before the White House, and once held many of America’s most notorious criminals, including bank robber “Slick Willie” Sutton and Al Capone.

Eastern State closed in 1971. The prison stands today in ruin, a haunting world of crumbling cellblocks and a place of surprising beauty. Today the prison is among the most popular visitor attractions in Philadelphia, hosting exhibits on both prison history and artist installations. Eight other artist installations will be on view during the 2005 season.

Cardiff and Miller will present Pandemonium in Cell Block Seven, a massive, cathedral-like, two-story wing completed in 1836. It has never been open to the public, and has been stabilized especially for this exhibition. The installation will open to the public on May 12, 2005 and will remain on view through November.

Pandemonium is the second installation at Eastern State organized by Philadelphia-based independent curator Julie Courtney. She co-curated Prison Sentences: The Prison as Site/The Prison as Subject, fourteen site-specific installations by twenty artists in 1995.

The public can find more information about upcoming events and the publication for Pandemonium at http://www.easternstate.org or (215) 236-3300.

Pandemonium has been funded by the Philadelphia Exhibitions Initiative, funded by the Pew Charitable Trusts, administered by the University of the Arts, Philadelphia; The Pew Fellowships in the Arts; The National Endowment for the Arts; and the LEF Foundation. [via e-flux]

Posted by jo at 05:10 PM | Comments (0)

Liquid Space

liquid-space_01.gif

3D Navigable Platform Exhibition and Workshops

LAb[au] is happy to invite you to Liquid Space 01+02 exhibition + lqs03 workshop taking place at Brakke Grond, Amsterdam_ the Netherlands. Liquid Space is a series of artistic workshops LAb[au] is setting up with different cultural institutions to design spatial audiovisuals with a specific focus on collaborative and shared processes resulting in installations, exhibitions and performances.

Here, the space-navigable music platform--a 3D engine developed by LAb[au]--is proposed as the starting point for development and exchange among the invited artists. The engine is based on the principle of integrating different media in a structural, programmed manner, inside and through electronic space navigation. It is an environment in which the performer navigates a 3D space of his own creation to compose music in real time, displayed in a 360° projection space with a quadraphonic sound system.

..lqs01: deSIGNforms _ Nabi Art Center,Seoul
..lqs02: deSIGNing by numbers MediaRuimte, Brussels
..lqs03: deSIGNing feedback loop systems _ Brakke Grond, Amsterdam

..04.05 - 14.05.05: liquid space 03 - workshop
..10.05 _ 20.00 h: liquid space 01+02 exhibition opening + presentation
..11.05 - 14.05 _ 10.00 - 24.00 h: exhibition *
..14.05 _ 20.00 h: liquid space 03 closing event, performance

LAb[au] + Eavesdropper, Els Viaene and Petersonic were invited by Brakke Grond to perform Exploring the Room in the context of the Liquid Space 03 deSIGNing feedback.loop systems workshop theme and as an opening event of the Liquid Space 01+02 exhibition.

Exploring the Room is a performance in which music, best defined by the practice of soundscaping, and real-time generated computer graphics stand on the same level. Establishing a constant dialogue through its particular stage design, sound and visuals build the room: a 3.00 x 3.00 x 2.25 meter "cube" made of projection screens and a quadraphonic speaker setup, the minimal footprint able to host 3 musicians for one hour.

Performers and audience project shadows onto the screen walls; the system captures this image and reintroduces it as an overlaid projection, closing the loop. All acting then becomes a matter of balance between black and white, light and shadow, sound and silence, one and zero.

Posted by jo at 07:19 AM | Comments (0)

May 07, 2005

Sonicforms

homephoto.gif

First Open Source Interactive Table

More from Cybersonica. Chris O'Shea presented another table to add to the list of musical tables. This one has a peculiarity though: it's the first to be open source. Sonicforms is an open source research platform developed to advance research on tangible interfaces for audiovisual environments by creating a community knowledge base and open tools for production.

Sonicforms is a repository for others to learn how to make their own interfaces and share their experiences; a set of tools for extending open source software to create these projects and of course it's also a physical installation. Video. SonicForms will be presented at OFFF in Barcelona, on May 13, 5.30pm. Open Room. The first showing of the installation will be at Submerge, 29th June - 1st July, Bristol UK. [blogged by Regine on near near future]

Posted by jo at 11:56 AM | Comments (0)

April 26, 2005

Video Conferencing Software as a Performance Medium

greshem.gif

Is There No There There?

"ABSTRACT: This paper surveys past performances in which the author collaborated with several other dancers, musicians, and media artists to present synchronized co-located performances at two or more sites. This work grew out of the author's participation in the landmark computer music ensemble, "the HUB". Each of the various performances were made possible by an evolving array of video conferencing hardware and software. These will be discussed. The problems and interesting side effects presented by latency and dropouts are a unique part of this performance practice. Leveraging the concepts of shared space, video and audio feedback generate evolving forms created by the combinations of the space, sounds and movements of the participants. The ubiquity of broadband Internet connections and the integration and constant improvement of video conferencing software in modern operating systems, makes this unique mode of performance and essential area of research and development in new media performance." From Video Conferencing Software as a Performance Medium by Scot Gresham-Lancaster.

Posted by jo at 10:31 AM | Comments (0)

The HUB

hub260.gif

One Instrument, Many Parts

The HUB, a group of composer/performers, were among the first to practice network music. The Hub is a computer network band. Six individual composer/performers connect separate computer-controlled music synthesizers into a network. Individual composers design pieces for the network, in most cases just specifying the nature of the data which is to be exchanged between players in the piece, but leaving implementation details to the individual players, and leaving the actual sequence of music to the emergent behavior of the network.

Each player writes a computer program which makes musical decisions in keeping with the character of the piece, in response to messages from the other computers in the network and control actions of the player himself. The result is a kind of enhanced improvisation, wherein players and computers share the responsibility for the music's evolution, with no one able to determine the exact outcome, but everyone having influence in setting the direction.

The Javanese think of their gamelan orchestras as being one musical instrument with many parts; this is probably also a good way to think of The Hub ensemble, with all its many computers and synthesizers interconnected to form one complex musical instrument. In essence, each piece is a reconfiguration of this network into a new instrument. (Perkis)

The Hub, formed in 1984 in and around San Francisco by Tim Perkis, John Bischoff, Scot Gresham-Lancaster, Phil Stone, Chris Brown, and Mark Trayle, used a network approach to improvisation, where each player controlled an aspect of the same composition.
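
The Hub's own pieces ran over dedicated hub hardware and, later, MIDI and Ethernet; the sketch below is not their protocol, just a minimal Python illustration of the pattern described above, in which each player's program broadcasts its musical decisions and lets incoming messages from the other players bias its own choices. The port number and message format are invented:

```python
import json
import random
import socket
import threading
import time

# One player's program: broadcast local decisions over UDP, listen for the
# others, and choose the next pitch in response to what has been heard.
PORT = 9930
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

recent_pitches = [60]          # pool of pitches heard from the band

def listen():
    while True:
        data, _ = sock.recvfrom(1024)
        recent_pitches.append(json.loads(data)["pitch"])
        del recent_pitches[:-16]                     # keep only the last 16

threading.Thread(target=listen, daemon=True).start()

while True:
    # Decide locally, but in response to the rest of the network.
    pitch = random.choice(recent_pitches) + random.choice([-2, 0, 3, 7])
    pitch = max(36, min(96, pitch))                  # keep a playable range
    sock.sendto(json.dumps({"player": "me", "pitch": pitch}).encode(),
                ("255.255.255.255", PORT))
    time.sleep(random.uniform(0.2, 1.0))
```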

Posted by jo at 09:30 AM | Comments (0)

April 25, 2005

Location33

movvvvie.gif

Envisioning Post iPodalyptic Mobile Music

Location33 investigates the potential for new types of music made possible by location tracking and wireless technologies. Listeners, with a GPS enabled PDA or mobile phone, walk around downtown Culver City, California and create a musical album that merges the traditional model of the song cycle with interactive narrative, location awareness, and game play.

Twenty nodes throughout the Culver City area act as portals into the world of the album. Each node is linked with a fragment of a song and when a player approaches one of the portals the music file is streamed to their device.
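
The post gives no implementation details, but the core of any such node/portal system is a proximity test between the listener's GPS fix and each node. A minimal sketch, with invented Culver City coordinates and trigger radius:

```python
from math import asin, cos, radians, sin, sqrt

# Illustration only: node coordinates and radius below are made up.
NODES = {
    "verse_1": (34.0259, -118.3927),
    "chorus":  (34.0244, -118.3953),
    "bridge":  (34.0231, -118.3901),
}
RADIUS_M = 40.0   # how close a player must be before a fragment streams

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in metres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def fragments_in_range(lat, lon):
    return [name for name, (nlat, nlon) in NODES.items()
            if haversine_m(lat, lon, nlat, nlon) <= RADIUS_M]

print(fragments_in_range(34.0258, -118.3929))   # e.g. ['verse_1']
```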

Each day a specific song is active. It is composed of fragments representing a verse, a chorus, or a bridge. As players walk around, they piece together the fragments, and develop the sense of their path being a song. Each day yields a different song, and all of the components for each day are composed within the limits of a singular musical idea or theme. The album is generated over the course of the week.

The album also focuses on a particular story element, which is sung by one of the characters, Mack, Mackbot, or the Narrator. It is up to the player to discover these musical story fragments and piece them together to form a cohesive story. In particular, the Narrator’s purpose is to contextualize the album within the setting of Culver City and to help the player with hints and instructions.

Location33 also adds elements of game play, asking the player to move around the space to find story fragments in a type of musical and narrative scavenger hunt.

In addition, players can alter the temporal structure of the album, performing time-travel operations to experience the album without having to be in Culver City on each day of the week.

Location33 is the thesis project of William Carter, a student in Interactive Media at the USC School of Cinema and TV.

Read the PDF and doc presentations to see how it works. [blogged by Regine on near near future]

Posted by jo at 12:17 PM | Comments (0)

April 18, 2005

Music using Games

gamemusic.gif

Game Engines to Performance Environments

"Following on from my research into controllers as instruments I have been looking at artists and programmers that have turned existing game engines info performance and composition environments. If you know of any more, please post a comment and I will post them here.

Vladimir Todorovic over at tadar.game music uses the Unreal engine for this purpose. He also provides the tools to download and do this yourself, as well as some mp3 releases.

floatingpoint.gif

Andrew Garton uses the Unreal engine in his audiovisual performance. Objects, which can be seen in this screen grab, have been assigned sounds which are set off when "nudged" by the player. Two to three levels are planned, each with their own set of sound objects that bounce from one end of the "performance space" to the other.




q3pd.gif

q3pd by delire and pix uses the Quake3 engine to synthesize sounds in Pure Data. Bot and player location, view angle, weapon state and local texture details are sent to PureData over a port connection. Any map can be used, but one map has been created specifically as a performance environment." [blogged by Chris from Pixelsumo]

Posted by jo at 12:31 PM | Comments (0)

April 14, 2005

Kings X Phone In

kingsx.gif

Musical Intervention

"During the day of Friday 5th August 1994 the telephone booth area behind the destination board at kings X British Rail station will be borrowed and used for a temporary cybercafe. It would be good to concentrate activity around 18:00 GMT, but play as you will." [see Heath Bunting's King's X Phone In]

"The calls "created a musical intervention that disrupted the daily routine of an urban transportation hub, as commuters circulating through the station chatted with strangers from around the world … public space was reconfigured aurally and socially."" From Heath Bunting's Kings Cross Phone-In and Lyotardian Disruptions by Jeff Lee

Posted by jo at 08:23 PM | Comments (0)

April 04, 2005

List of Music Tables

reacTable3[1].gif

Tangible Musical Interfaces

"This is overview on tangible musical interfaces is focussing on table-top controllers, such as music tables and audio building blocks, mostly chosen for their relation to the reacTable* project. It also includes few additional projects, which we think are somewhat closer to the concept of tangible interfaces rather than simple musical controllers." [via near near future]

Posted by jo at 08:10 AM | Comments (0)

Telephony

telephony.gif

Nokiatunes

Telephony--by Thomson & Craighead--allows gallery visitors to dial into a wall based grid of 42 Siemens mobile telephones, which in turn begin to call each other and create a piece of 'music.' Each phone has been individually programmed with a different ringtone, which played en-masse, create various harmonic layers all of which are based in some way on the popular and prevalent, NokiaTune. The more people who dial into the work (whether inside or beyond the gallery walls) the more complex and layered the audio becomes. A piece of anodyne 'elevator' musac also plays into the space as a kind of background layer, and is also an improvisation on Nokiatune. [via neural.it]

Posted by jo at 08:01 AM | Comments (0)

March 21, 2005

THREE CITIES

mort3.gif

MULTIMEDIA TELE-CONCERT

Featuring Morton Subotnick, David Rosenboom, and Steina Vasulka with Leo Smith and J.B. Floyd. The three-part evening began with a performance by Morton Subotnick, in New York, who played the Yamaha Disklavier in Santa Monica using finger-controlled MIDI triggers. The second part of the evening was a bi-coastal tele-collaborative concert between David Rosenboom, Dean of the CalArts School of Music, in Santa Monica, and pianist J.B. Floyd and trumpet player Leo Smith in New York City. In each city there were two Disklavier pianos, one played by the local artist, the second playing the notes activated by the pianist in the other city. The third part of the evening featured Steina Vasulka in Santa Fe playing a MIDI violin which controlled laser videodisk players in both New York and Santa Monica.

As she played her violin in Santa Fe she controlled and selected sections of the videodisk showing her playing the same piece 20 years earlier. Also during the course of the evening we took the Santa Monica and New York audiences through a tour of Netscape-based World Wide Web sites on the Internet and discussed the implications of performing artists disseminating their work and working together through this medium. This is an example of many years of collaboration between ECI, Morton Subotnick, David Rosenboom, Mark Coniglio, and CalArts. This collaboration continues with the hope of showcasing at least one "state-of-the-art" tele-collaborative music performance a year, working with the leading avant-garde musicians of our time. See additional Electronic Cafe projects from 1984-1998 here.

Posted by jo at 07:37 AM | Comments (0)

March 14, 2005

300 Square Mile Recording Studio

map2.gif

Teleconcert

In 300 Square Mile Recording Studio, Stanford University's student Mariachi Ensemble performed in a live, distributed recording session as part of the Fall AES Meeting in San Francisco. Three sites in the Bay Area (map) were connected for this professional audio "teleconcert" collaboration. The audio signals were transported with extremely low latency using research software and the regional internet (CENIC).

Three musicians in each of the three sites performed together for a Halloween session on October 31, 2004. The main stage was at Genentech Hall on the new UCSF campus, and the two other trios were located at UCB (CNM) and Stanford (CCRMA). 300 Square Mile Recording Studio was a co-production of Chris Chafe, Jason Bryant and the SoundWIRE group at CCRMA.

Posted by jo at 07:19 AM | Comments (0)

March 07, 2005

QUINTET.NET

quintet.gif

Full-Fledged Multimedia Performance Environment

A new version of the critically-acclaimed network-performance Quintet.net has just been announced. Quintet.net, which was called by computer musician Ian Whalley a highlight of the 2003 ICMC, now features an elegant brushed-metal graphical user interface and many improvements under the hood, such as network jitter compensation.

With its Viewer add-on, the application is a full-fledged multimedia performance environment with real-time notation, microtonal playback capabilities and a suite of authoring tools, the Composition Development Kit. Quintet.net is based on Max/MSP/Jitter and is available for free for both the Macintosh OS X and Windows platforms.

The Hamburg Network Composers' Collective, founded in 2003, is a permanent ensemble for the performance of compositions written or transcribed for Quintet.net.

Quintet.net was featured in the 2004/05 issue on network music of the Neue Zeitschrift für Musik (with CD-ROM) and in the current, February 2005 issue of Leonardo Journal.

More information is provided at http://www.quintet.net (the site was just updated and features background information on several international Quintet.net projects as well as network music performance in general).

Posted by jo at 08:00 AM | Comments (0)

February 23, 2005

Seine hohle Form

ii-265x310.gif

Collaborating on Interactive Performance Works

Abstract: Composers and choreographers face unique and largely unexplored problems as they collaborate on interactive performance works. Not the least of these problems is settling on schemes for mapping the various parameters of human movement to those possible in the world of sound. The authors' collaborative piece, Seine hohle Form, is used as a case study in the development of effective mapping strategies, focusing on the mapping of dance gesture to real-time music synthesis. Perceptual correlation of these mapping strategies is stressed, albeit through varying levels of abstraction. Read Seine hohle Form: Artistic Collaboration in an Interactive Dance and Music Performance Environment by Joseph Butch Rovan, Robert Wechsler and Frieder Weiß, Crossings: Electronic Journal of Art and Technology, Issue 1.2

Posted by jo at 03:38 PM | Comments (0)

January 20, 2005

the Electronic Guy

electronicguy.gif

Performing Electro-Acoustic Clothes

Benoit Maubrey performs the Electronic Guy (guitar, sampler, electroacoustic Jacket, solar radio); Video In Studios, Vancouver February 14th, 8:00 PM; and Location 1, NYC, February 18th, 8:00 PM. Following the performance will be a video and slide presentation concerning Die Audio Gruppe.

Benoît Maubrey is the director of Die Audio Gruppe, a Berlin-based art group that builds electro-acoustic clothes and performs in them. Equipped with movement and light sensors, radio receivers, sound generators, samplers, amplifiers, loudspeakers - the clothes produce sounds by interacting with the environment and in response to the performers' movements (past examples: AUDIO BALLERINAS, AUDIO GEISHAS, AUDIO STEELWORKERS, BONG BOYS, AUDIO PEACOCKS...).

Stay tuned for Phonomaniacs (see web site): Maubrey's visit is the opening movement in a larger cooperative project between Die Audio Gruppe and local artists being planned for later this year. It will consist of a series of workshops followed by a sound event featuring 20-30 electro-acoustically costumed performers.

Posted by jo at 09:56 AM | Comments (0)

January 19, 2005

F2T (Free to Talk?)

30624.gif

SMS Rapper Slang

F2T is an interactive musical installation about "argot" and its different contemporary applications. F2T, a creation by artists Frank Plant (United States) and Thomas Charveriat (France), combines sculpture, robotics, hip hop, and SMS messaging to explore the ways technology shapes the development of language, particularly new forms of slang. Viewers interact directly with the artwork by sending it a short text message from their mobile. Once received, the SMS is scanned for frequently used words and, when a match is found, the four elements are activated. The main piece, "Rapper," begins to twist and wave his hand while mouthing a rap based on the message sent, the new lyrics culled from more than 130 different hip hop phrases written by Amsterdam-based lyricist and composer Jim Barnard. The rest of the piece is simultaneously set into motion: "Boom Box" starts flashing and blasting out the song, while "Joy Ride," a bouncing low-rider, and "Shake Ass" (looks like it sounds) begin to move, triggered by ambient sound sensors. When the song is finished, a thermal printer spits out a souvenir with the original SMS and the words of the "Rapper." (via Rhizome)
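
As an illustration of the keyword-matching step described above (the actual keyword list and Jim Barnard's 130+ phrases are not public), here is a minimal sketch with invented placeholders:

```python
import re

# Invented keywords and phrases, standing in for the installation's phrase bank.
PHRASE_BANK = {
    "money": "stacking paper while the servos hum",
    "phone": "hit my inbox and the whole crew moves",
    "love":  "signal's strong, low-rider bouncing to the beat",
}

def rap_from_sms(sms: str) -> list[str]:
    """Return the phrases triggered by keywords found in an incoming SMS."""
    words = set(re.findall(r"[a-z']+", sms.lower()))
    return [phrase for keyword, phrase in PHRASE_BANK.items() if keyword in words]

print(rap_from_sms("no money on my phone lol"))
```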

Posted by jo at 02:50 PM | Comments (0)

January 16, 2005

Suspending Disbelief

im6[1].jpg

Stretching Conversations

Suspending Disbelief, by Anne Niemetz, is a performance in which four persons create music by stretching their suspenders. The suspenders have built-in bend sensors that trigger Larry King statements taken from his TV show (King was known for his habit of "suspender-stretching"). The performers can select, scratch and replay King's voice, in much the same way a DJ can while playing records. The choice of sampled words and sentences lets the performers converse with each other about suspenders, showbusiness and the current social and political situation. (Posted by Régine Debatty)

Posted by Regine at 03:15 AM | Comments (0)

January 12, 2005

Ground Station

sdn_icon2.jpg

military data makes music

Daniel Jolliffe and Jocelyn Robert's installation Ground Station sets out to make visible different cultural approaches to technology, and to make audible the invisible data that connects these approaches. In this sense Ground Station's performance is not one of music but an aural translation of the current technologized society that we inhabit.

Ground Station produces music in real time by following the current azimuth, elevation and signal strength of twenty-seven Global Positioning System (GPS) satellites. Ground Station (GS) is in a sense an audible reflection of the activities of the GPS network it watches. Whereas GPS was developed as a positioning technology to aid in warfare, GS inverts the traditional use of this data by watching the positions and movement of the satellites themselves. As GPS data is fed into GS, it is processed by an algorithm designed by the artists that filters and transcodes it into musical notation. This unique, continuous musical score is then played live on a Disklavier piano.
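
The artists' filtering and transcoding algorithm is not published; the sketch below only illustrates the kind of mapping the description suggests, with azimuth choosing a pitch class, elevation the register, and signal strength the key velocity of a note a Disklavier could play. The sample satellite fixes are made up:

```python
# Illustrative mapping only, not the artists' algorithm.
satellites = [
    # (azimuth deg, elevation deg, signal-to-noise ratio) -- invented fixes
    (45, 12, 33),
    (180, 67, 41),
    (310, 5, 22),
]

def satellite_to_note(azimuth, elevation, snr):
    pitch_class = int(azimuth / 360 * 12) % 12        # 0-11 around the compass
    octave = 2 + int(elevation / 90 * 5)              # low horizon = low register
    note = 12 * octave + pitch_class                  # MIDI note number
    velocity = max(1, min(127, int(snr * 2)))         # louder when signal is strong
    return note, velocity

for sat in satellites:
    print(satellite_to_note(*sat))
```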

Ground Station works by, in effect, 'borrowing' data from the US military's Global Positioning System satellite network. There are two sets of authors for the music GS produces: the artist-programmers who create and contextualize the work, and the military infrastructure that maintains and oversees the GPS network. The connection between these authors is a definite one: GS is compositionally dependent on the data it receives from the GPS network, and in turn on the ground system that controls the satellites. The role of Schriever Air Force Base in controlling the music produced by GS is indirect yet significant, as the music depends on satellite trajectory, which is under direct military control. Without this ground control, the music produced by GS would eventually fade and cease, in parallel with the decay of the satellites themselves.

Systems aside, GS was not conceived to champion technology or the possibilities of computer-based musical composition. As collaborators we have little interest in the aesthetics of the 'music' produced by the piano. Rather, its goal is to produce music as a kind of cultural artifact of the time and place we live in. Musically, GS relies on the supposition that musical composition is a product of the time and place in which it is produced, rather than its formal or syntactical qualities. A piano played under the rocket fire of wartime Beirut, for example, and GS's piano-manifested satellite data are both musical compositions that reflect specifically upon the social state of the cultures they are created within.

Posted by michelle at 02:40 AM | Comments (0)

January 09, 2005

Three by Thomas Charveriat

44_MachineManifesto01.gif

Animatronic Installations

The Machine Manifesto is an interactive musical performance that reflects upon the relationship between human beings and machines. It is composed of a series of mechanical musical instruments or automatons, and directed by an orchestral conductor. The instruments--programmed to perform independent tasks--follow the commands sent by the conductor via image recognition software. Machine Manifesto reflects the existing asymmetry between humans and machines (conductor and instruments), and provides an ironic look at the privileged position of machines in our society.

30225.gif

Light Activated Faces: A torch flashed at a square panel standing vertically on the floor produces the sound of human onomatopoeias, which vary depending on the area onto which the light is projected. Nine different grimaces are depicted on this photographic panel. Each facial expression becomes visible by means of a light box mechanism whenever the flashlight points at it. A corresponding onomatopoeia is then produced by a sound sampler. The effect is a large human chorus conducted by the action of a flashlight.

30226.gif

Telephones: Artificial intelligence is a rapidly developing area of scientific research. Many jobs formerly held by humans are now being done by computers, often by "bots" (computer-generated intelligent entities that present themselves as human in order to perform tasks for others). One field where this has become common is automated phone service. This piece illustrates the computer's potential by replacing the live person on a sex phone line with voices created by a computer. In it, twelve different computer-generated personalities have intimate conversations.

Thomas Charveriat (Paris, 1974) lives and works in Barcelona, Spain. He studied photography at the School of Visual Arts (SVA) and afterwards obtained a master's degree in sculpture from Columbia University, both in New York City. After finishing his studies in New York, he moved to Barcelona, where he obtained a master's degree in digital arts at the Pompeu Fabra University. He has won a number of scholarships and prizes, and has participated in group art exhibitions in New York, Paris and Barcelona since 1998. At the moment, Thomas Charveriat creates animatronic installations with GPS, SMS, video, sound, electronic data and humor that interact with the viewer in a peculiar environment, where complexity and elegance combine to create a sensorial ambiguity associated with an atmosphere of vulnerability and apprehension.

Posted by jo at 11:59 AM

January 05, 2005

The Crossroad

crossroad.jpg

Music at Traffic Lights

The Crossroad, a quadrophonic installation created by Tomas Dvorak alias Floex, generates music from the colours of the cars that meet at a crossroad. As cars pass by they create impulses which carry information equivalent to music: tone and interval, rhythm, structure.

Cameras shoot and sample the colours of the cars at 4 places before the traffic lights. The visual information is sent to a computer, which analyzes the data.
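
The artist's actual mapping is not documented here; as a hedged sketch of the colour-to-music step, the function below derives a pitch and a note length from a sampled car colour (the scale and the brightness-to-duration rule are arbitrary choices):

```python
import colorsys

# Illustrative mapping: hue chooses the pitch within a scale, brightness the length.
SCALE = [0, 2, 3, 5, 7, 8, 10]   # minor scale degrees, an arbitrary choice

def car_colour_to_note(r, g, b, root=48):
    """Map an RGB car colour (0-255 per channel) to a (MIDI pitch, duration) pair."""
    h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
    degree = SCALE[int(h * len(SCALE)) % len(SCALE)]
    duration = 0.25 + (1.0 - l) * 1.75        # darker cars sustain longer (up to 2 s)
    return root + degree, round(duration, 2)

print(car_colour_to_note(200, 30, 40))    # a red car
print(car_colour_to_note(240, 240, 245))  # a white car
```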

The work is also based on prerecorded samples and loops composed by a program operating on data from the crossroad, so the artist doesn't create the final form of the composition, but neither does he let the computer create all the data randomly.

One of the key concepts of this work is that an everyday phenomenon can drive an artistic process. From this point of view the work can be taken as a search for the codes of art in ordinary reality. (Posted by Régine Debatty)

Posted by Regine at 02:18 AM | Comments (0)

December 27, 2004

netzwissenschaft

gred1.gif

Emerging Infrastructures of All (Inter)net Research

Dr. Reinhold Grether's network research | netzwissenschaft site maps the "emerging infrastructures of all (inter)net research endeavours. net.science as an anthropology of connectivity is trying to overcome the constraints of specialist method transfers on net matters. the protuberance of technical networks necessitates a professionalization of human net knowledge. neither the isolation of concepts as in basic research nor the encapsulation of processes as in applied sciences will ever be able to adequately describe the complex autopoiesis of networks. net.science is undoubtedly developing into a scienza nuova of its own right."

Check out his Mobile Art and Virtual Performance research areas.

Posted by jo at 04:45 PM | Comments (0)

December 13, 2004

Sky Orchestra

balloon_orch_logo.gif

Nocturnal Audio Landscapes

The Sky Orchestra project, by UK artist Luke Jerram--in collaboration with hot air balloon pilot Peter Dalby and composer Dan Jones--is an ongoing research project that explores how one can perceive an artistic experience while asleep. It is an experimental artwork bringing together performance and music to create visual audio installations within the air and within the mind.

Sky Orchestra are developing music specifically for sleeping people which is delivered out of the sky. Seven hot air balloons, each with speakers attached, take off at dawn to fly across a city. Each balloon plays a different element of the musical score creating a massive audio landscape. "Like whales calling in the ocean, the same sounds may be heard in quick succession passing from one balloon to another across the sky..."

Many hundreds of people experience the Sky Orchestra event live as the balloons fly over their homes at dawn. The airborne project is both a vast, spectacular performance and an intimate, personal experience. The music is audible, both consciously and subconsciously, to all those in the balloons' flight paths. After each performance, leaflets are delivered to homes and shops along the flight path and responses are collected by email. A radio phone-in is also arranged. The feedback affects the ongoing development of the composition. (via near near future)

Posted by jo at 11:00 AM | Comments (0)

December 10, 2004

OFFLINE

netjamm.gif

Net Jamm

Outside the frame of the exhibition, and for the opening of Unzipping Codes, the net jamm performance will be led by the members of the group OFFLINE: Brian Mackern, Arcángel Constantini, and Fernando Llanos. They will mix live inputs from the internet, provided by themselves or by OFFLINE colleagues, in a performance resembling a mixture of DJ and VJ procedures while revealing the unstable nature of digital reality.

OFFLINE is a cross-border global group of artist collaborators working creatively with the Internet and related digital and networked media. The purpose of OFFLINE is to bring together online art practitioners and creative technical consultants for collaborative projects staged at real world locales such as art and music festivals, symposiums, cafeterias, rooftops and city streets. OFFLINE projects will take the form of live performances, installations, and diverse experimental augmented reality displays. OFFLINE is organized via an online mailing list and a central server.

Posted by jo at 09:28 AM | Comments (0)

November 28, 2004

3 Internet2 Experiments

soundmesh.gif

Soundmesh: Design and OS X version: Mara Helmuth; RTmix collaboration: Brad Garton; Linux version: Ivica Ico Bukvic. Soundmesh plays uncompressed audio data to remote host IPs. It handles 44.1 kHz AU/Sun/NeXT or AIFF files. It is an RTcmix-3.1.0-based application. You do not need to install cmix, as all binaries are included. Because high bandwidth is required, you must be at an Internet2 site to participate.

soundwire.gif

SoundWIRE: This project is fundamentally concerned with two areas of research: streaming professional-quality audio to remote destinations using next-generation Internet, and developing practical, intuitive methods for assessment of network reliability. Currently, digital audio streaming across the internet is compromised by restricted bandwidth and buffering of audio data to safeguard against network irregularities. The results are signal compression and potentially long delay times.

l2.gif

Global Visual Music Project: Goals are to: a) develop software for the creation, mediation, and dissemination of real-time multimedia content, including high resolution two and three dimensional graphics, digital audio and video; b) develop a networking capability for this software, so that multimedia data could be shared between users in many locations; c) organize a high profile event to unveil these resources by staging a networked multiple site public performance with accomplished artists in established artistic and technological venues; d) create a web site to disseminate information about our research; e) freely distribute the software we create; and f) develop and publish a communication protocol for networked distribution of high quality real-time multi-media data.

Posted by jo at 04:05 PM | Comments (0)

November 25, 2004

laptop_orchestra

interna_grande.gif

Conducting Synaesthetic Instruments

laptop_orchestra is an interactive synaesthetic instrument designed for performances of audio visual compositions in real time. Fifteen laptops are arranged in rows similar to the formation of an orchestra. Algorithms activate sounds and abstract visual shapes based on the color spectrum; each laptop has its own sound and instructions for the construction of a shape.

From the orchestra conductor's podium, lightly touching metal stems activates or deactivates each laptop, permitting the generation of an endless number of different compositions. Unlike software for elementary audio and visual composition, this installation situates the interaction in a defined, public space, drawing a reactive environment within its physical dimensions. Software made with Processing. Download pde file and sound sample.
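
The interaction model--one sound and one shape recipe per laptop, toggled from the podium--might be sketched like this. This is a hypothetical Python outline; the actual work is a Processing application whose internals are not documented here:

from dataclasses import dataclass

@dataclass
class LaptopVoice:
    sound: str        # sample or synth patch assigned to this laptop (hypothetical)
    shape_hue: float  # position in the colour spectrum driving its shape
    active: bool = False

# Fifteen laptops, as in the installation; names and hues are invented.
orchestra = [LaptopVoice(sound=f"voice_{i:02d}", shape_hue=i / 15) for i in range(15)]

def touch_stem(index):
    """Called when the conductor touches the metal stem wired to laptop `index`."""
    voice = orchestra[index]
    voice.active = not voice.active
    print("activate" if voice.active else "silence", voice.sound)

touch_stem(3)   # bring laptop 3 into the composition
touch_stem(3)   # and take it out again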

Limiteazero is an architecture, media design and media art studio based in Milan, Italy.

Limiteazero’s areas of expertise include: network installations design, set design, exhibition design, media in architecture and public space, interaction design, digital identity design, sound design, creative-network-software design.

Limiteazero's experimental activity concerns the exploration of alternative relations between man, machine and environment, bridging the physical and the digital. It tries to establish an emotional approach to interface systems, rather than a logical/deductive one.

The work on form focuses mainly on investigating forms and images generated by mathematical processing and by random evolution processes. The "random" sequence is our most-used coding expression.

Posted by jo at 12:07 PM | Comments (0)

October 28, 2004

Cell Phone Drum Machine

drum.jpg

Collaborative Rhythm Generator with an SMS Interface

Cell Phone Drum Machine is a rhythm generator that users control by sending text messages from their phones. Here's how it works: the operator links his/her phone to the computer and starts the drum sequencer. Users then send special SMS commands that control the rhythm to the operator's phone. The on-screen display shows which phones the commands originated from.
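
No command syntax is documented in the announcement, but the mechanism--short text messages edited into a running step sequencer--could look something like the following sketch; the "KICK 1 0 0 0 ..." format and the 16-step pattern are invented for illustration:

# Invented command syntax: "KICK 1 0 0 0 ..." sets a 16-step pattern for one drum.
PATTERN_STEPS = 16
patterns = {"KICK": [0] * PATTERN_STEPS,
            "SNARE": [0] * PATTERN_STEPS,
            "HAT": [0] * PATTERN_STEPS}

def handle_sms(sender, text):
    """Apply one incoming text message to the shared drum pattern."""
    parts = text.upper().split()
    if not parts or parts[0] not in patterns:
        return
    drum, steps = parts[0], parts[1:PATTERN_STEPS + 1]
    for i, value in enumerate(steps):
        patterns[drum][i] = 1 if value == "1" else 0
    print(f"{sender} set {drum}: {patterns[drum]}")  # on-screen display names the sender

handle_sms("+15551234567", "KICK 1 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0")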

Matt Hall and John Watkinson will present the project at dorkbot-nyc on Wednesday, November 3rd, 7pm at Location One in SoHo.

Posted by jo at 10:08 AM | Comments (0)

October 27, 2004

ArtFutura

top_logo.jpg

Augmented Reality

ArtFutura's theme this year is Augmented Reality. It's taking place now in Barcelona [October 28th-31st]. The programme includes Howard "Smartmobs" Rheingold; Blast Theory, who will perform Can You See Me Now?; the SimpleTEXT performance; installations and experimental projects from Dublin's MediaLab Europe and Montreal's SAT; Richard Marks, creator of EYETOY; Greyworld; Fiona Raby; and others.

bandeau_BCN_en_03.jpg

Saturday October 30, 2004 - 4PM - 8PM, live from Barcelona, SAT presents Phéromones, Art Futura's closing show. Phéromones is a networked multichannel audiovisual techno performance bringing together musicians Alain Thibault, Physical Noise Theatre & Champion and VJs jocool, Ladyroll & Le couple in Barcelona, as well as Yan Breuleux, Nuclear Ramjet & Johnny Ranger in Montreal.

Phéromones will connect the Mercat de les Flors in Barcelona and the SAT's home base in Montreal over an Internet2 optical fiber network (providing data flows of up to 2 Gigabytes per second) in order to share music and images generated on both sides of the Atlantic, using an experimental technical setup: eight audio channels and immersive multi-screens. R&D for clubbing culture.

From October 26 to 30, 2004
A team of 17 artists, researchers and technicians, members of the Territoires ouverts - TOT - Open Territories consortium initiated by the SAT, has been invited by the Art Futura festival to present their work in Barcelona.

Several of the tools developed at the SAT, such as the nSLAM multichannel audio application and the pixelTANGO digital video mixer, will be put to use to demonstrate the developers' know-how and expertise in IP video streaming and multichannel audio, IP telepresence (point-to-point and multi-point) and the transmission of immersive audio-visual environments over IP networks.

The Open Territories project aims to foster the emergence of innovative forms of cultural expression by engaging the creativity of a new generation of artists and creators, in whose hands will be placed a host of advanced interface and networking technologies. The TOT project will be able to open the broadband network environment to talented creators who are ideally placed to illustrate broadband's potential as a means of avant-garde cultural expression.

Thursday October 28, 2004 - 10AM - 5PM
Friday October 29, 2004 - 10AM - 5PM
Saturday October 30, 2004 - 10AM - 4PM
SAT invites you and your friends to "cross the Atlantic" and project yourself at the beautiful Mercat de les Flors in Barcelona, site of the Art Futura Festival.

During the three days of the festival, using two Telepresence stations, people on both sides of the Atlantic will be able to meet and speak as if they were sitting at the same table, with high audio and video resolution.

To participate and meet the Catalan artistic community, Art Futura's visitors and the SAT team in Barcelona, come to the SAT Café each day between 10AM and 5PM (4PM Saturday).

Free entrance

Posted by jo at 02:00 PM | Comments (0)

October 21, 2004

Malleable Music

sony.gif

Social tech is performative

"...Malleable Mobile Music system is my idea of truly social (i.e. performative) mobile tech.

"'Historically, music was never meant to exist in isolation,' Tanaka says. 'There was always a physical, acoustical, and even social context. These kinds of technologies can add some of those elements back in to the listening experience'...As one participant naturally sways to the groove, the PDA's motion sensor detects his motion and shifts the tempo of the song. With the song's intensity building, another listener subconsciously grips her PDA tighter, introducing echo effects into the mix. The closer that listening partners move to each other, the more prominent their part in the song becomes. Meanwhile, the software applies various 'error correction' techniques to prevent an onslaught of arrhythmic noise, unless of course that's the goal. As they listen to it, the mobile music orchestra transforms the tune into a dubby, spacey version of the familiar Bjork song...Someday, malleable music may even become an art form in its own right, leading to a duet between the artist and the audience."

Check out the Ubicomp 2004 paper for more detail. Originally posted by Anne Galloway on purse lip square jaw.
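
The scenario quoted above is, in effect, a set of sensor-to-mix mappings: sway drives tempo, grip drives echo, proximity drives prominence. A minimal, hypothetical sketch of such mappings follows; the ranges, scalings and parameter names are invented, not Tanaka's implementation:

def mix_parameters(sway_hz, grip, distance_m):
    """Map one listener's sensor readings onto their share of the shared remix.

    Ranges and scalings are invented: sway_hz is the swaying rate, grip is 0-1
    pressure on the PDA, distance_m is the distance to the nearest listening partner.
    """
    tempo_bpm = 100 + 40 * min(sway_hz, 1.0)        # swaying faster nudges the tempo up
    echo_mix = min(max(grip, 0.0), 1.0)             # a tighter grip brings in more echo
    prominence = max(0.0, 1.0 - distance_m / 10.0)  # closer partners get a louder part
    return {"tempo_bpm": tempo_bpm, "echo_mix": echo_mix, "prominence": prominence}

print(mix_parameters(sway_hz=0.6, grip=0.8, distance_m=2.0))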

Posted by jo at 11:43 AM

October 12, 2004

Glimmer

glimmer.gif

Scores of Light

Glimmer, by Jason Freeman, engages the concert audience as musical collaborators who do not just listen to the performance but actively shape it. Each audience member is given a battery-operated light stick which he or she turns on and off over the course of the piece. Computer software analyzes live video of the audience and sends instructions to each musician via multi-colored lights mounted on each player’s stand. The piece draws from a long history of interactive music, art, and cinema, but it also engages in current issues regarding the use of technology within orchestral concerts. Many orchestras are currently experimenting with interactive handheld devices and large video displays which attempt to explain the music being performed. But many such devices ultimately create barriers to engaged listening. Glimmer challenges audiences to become active collaborators in the live performance, rather than reinforcing their traditionally passive role. The American Composers Orchestra will perform Glimmer in its world premiere on Friday, January 21st, 2005, 7:30 p.m. at Zankel Hall at Carnegie Hall (New York).

The audience is divided into seven groups, each of which is linked to a group of three or four musicians in the orchestra. The percentage of audience members in a group with their light sticks turned on directly affects the dynamics, timbre, and tempo of the corresponding group of musicians. Additionally, a comparative analysis of audience groups regulates pitch choices, texture, and the overall prominence of each group within the orchestra. The more an audience group changes its lights, the more varied and prominent the music played by the corresponding musicians.

Musicians do not perform from a linear score in conventional musical notation. Instead, they respond to the light on their stand according to a series of instructions printed in their part. Different color families indicate which notes to play, the brightness of those colors indicates dynamics, and pulsations and flashes indicate rhythm and accents. Each player's light is independently controlled directly by computer software. There is no conductor.

Glimmer is realized using off-the-shelf video equipment, DMX-controlled LED lighting, novelty battery-operated light sticks, and custom computer software written for Cycling ‘74’s Max/MSP/Jitter environment.
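
Glimmer itself is realized in Max/MSP/Jitter, but the audience-to-orchestra mapping described above can be paraphrased in a few lines of Python. The thresholds and scalings below are invented for illustration, not Freeman's actual values:

def group_instructions(lights_on, changes):
    """Turn audience-light statistics into playing instructions for seven groups.

    lights_on[g]: fraction of group g's light sticks currently lit (from video analysis).
    changes[g]: how often group g has toggled recently. Thresholds are invented.
    """
    total_changes = sum(changes) or 1
    instructions = []
    for on, ch in zip(lights_on, changes):
        instructions.append({
            "dynamic": "ff" if on > 0.75 else "mf" if on > 0.4 else "pp",
            "tempo_scale": 0.8 + 0.4 * on,       # more lights on, faster playing
            "prominence": ch / total_changes,    # busier groups come to the front
        })
    return instructions

print(group_instructions([0.2, 0.9, 0.5, 0.1, 0.6, 0.3, 0.8],
                         [4, 12, 7, 1, 9, 3, 10]))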

Glimmer was commissioned by the American Composers Orchestra, Steven Sloane, music director, Robert Beaser, artistic director, Dennis Russell Davies, conductor laureate.

Posted by jo at 09:37 AM | Comments (0)

October 07, 2004

The Wireless Invisible

issue15_graphic.jpg

Sound Tracks and Data Footprints

"Stalking the footfalls and echoes of the wireless invisible" by Tobias C. van Veen, Horizonzero, Issue 15: New Movements in Digital Music

When what is "useful" is defined in terms of its market, when GPS directions lead to the nearest McDonald's, then wireless art becomes yet another pervasive intrusion, a one-way dictation. The ephemeral terrain of wireless art in all its forms is a floating, multicast struggle where the century old battles of radio are being re-played out. Yet - this time with Open Source software and technologies; with artist-driven hardware development; with a programmer's technique and a hacker's verve; and, like a hive, with a collective approach to anticipating the enemy's moves on the global level. This machine aims to engage the participant in the game: encouraging the step of becoming a content-creator rather than just a passive receptor, disrupting the sender/receiver of communication, the fortress of static/noise, of what defines efficiency and usability. The target of producing ever new and publicly-oriented ways of engaging life through the tendrils of technology is in sight. This disruptive yet productive desire has been a persistent dream since the avant-garde encountered radio one hundred years ago. Read article.

Posted by jo at 08:36 AM | Comments (1)

October 06, 2004

Organised Sound: An International Journal of Music and Technology

music.gif

Networked Music

Interconnection has always been a fundamental principle of music, prompting experimental artists to explore the implications of linking their computers together long before the Internet reached the public consciousness. As the Internet achieved critical mass over the past decade, networking technology took centre stage as the key to a vast new territory of possibility, facilitating remote participation, distributed processing, and redefinition of musical space and time. The Web emerged as a virtual venue for countless musical purposes, and as analog acoustics transformed to digital representations, packets of data carried by IP from one address to another became a modern metaphor for air molecules transmitting the tone of a vibrating body to the eardrum.

As with any new technology, applications of networking to music have evolved from naïve proofs-of-concept to more sophisticated projects, and we stand now at a point when 'internetworking' is taken for granted, novelty is expiring and artistic goals more often transcend technical considerations. From this vantage, the essential question is not how networking and music are combined, but why. What is the unique experience that can be created? Whose role can be empowered or transformed: composer, performer, audience? Where can sound come alive that it couldn't otherwise? Networked music can reinterpret traditional perspectives on stagecraft, ensemble, improvisation, instrumentation, and collaboration, or enable otherwise impractical relationships between controllers, sensors, processors, inputs, and outputs. The network can be an interface, a medium, an amplifier, a microphone, a mirror, a conduit, a cloud, or a heartbeat.

The network is all of us. Music is the sound we make. Listen...

Call for Articles and Works

Volume 10, Number 3
Issue thematic title: Networked Music
Date of Publication: December 2005
Publishers: Cambridge University Press

We invite submissions from composers, performers, artists and researchers working in the realm of digital media and sound. Submissions related to the theme are encouraged; however, those that fall outside the scope of this theme are always welcome.

Issue Co-ordinators: Margaret Schedel [gem at schedel.net] and John P. Young [sound at netmuse.org]. This issue is being prepared in collaboration with the International Computer Music Association (ICMA).

The theme represents many avenues for discussion including, but not limited to:

Networked control interfaces (hardware/software)
Sensor arrays/interaction
Distributed/remote participation (composition, performance, reception)
Broadcasting/multicasting/streaming media
Virtual musical environments/venues
Aesthetics/philosophy of musical interconnection
Web-based music projects
OpenSoundControl
Distance learning/education
Online collaboration
Networked data sonification
Real-time remote sensing
Distributed processing
Networking for fault tolerance
Musical avatars/agents/bots
Emergent network phenomena/effects/behavior
Neural networks
Internet2
Alternative musical networks (RF, MIDI, WiFi, Bluetooth, etc.)
Strategies for mitigating network limitations (bandwidth, latency, etc.)

This issue continues the annual partnership between Organised Sound and the International Computer Music Association, with previous themes including "Performing with Technology" and "Collaboration and Intermedia." In exploring these prior areas, networking has emerged as a common element underlying a wide variety of innovative projects, prompting a more focused look at the mutual influence between networks and music. This should be no surprise in the electroacoustic field, where our machines are partners as much as tools, and working with other artists or often even solo requires connection between multiple machines. In the pre-network era, technical obstacles frequently dictated that much computer music occurred in relative isolation, with musicians expending precious attention acting as interpreters between hardware and other humans. So in one sense, networked music can be simply a recapitulation of acoustic music principles, of listening and sensitivity to other performers and audience, by enabling computers to participate equally in the musical conversation. Networking can also radically alter these traditional principles, most obviously by decoupling the spatial framework, enabling some or all of the participants to act and perceive without being physically present. Thus networked music is fertile territory for the composers, performers, and researchers who comprise the ICMA, both as a potential means of overcoming challenging limitations of technology and as a source of new possibilities we have yet to imagine.

Submissions may consist of papers, with optional supporting short compositions or excerpts, audio-visual documentation of performances and/or other aspects related to your submission. Supporting audio and audio-visual material will be presented as part of the journal's annual DVD-ROM, which will appear with issue 10/3. Related discussion will be located on the ICMA Array website, and additional multimedia at Organised Sound's Cambridge University Press website.

SUBMISSION DEADLINE: 1 March 2005

SUBMISSION FORMAT

Notes for Contributors and further details can be obtained from the inside back cover of published issues of Organised Sound or from:

http://uk.cambridge.org/journals/oso/

Email submissions should be sent to (please see SUBMISSION FORMAT above): os at dmu.ac.uk

Hard copy of articles (only when requested) and other material (e.g., images, sound and audio-visual files, etc.) should be submitted to:

Prof. Leigh Landy
Organised Sound
Clephan Building
De Montfort University
Leicester LE1 9BH, UK.

Editor: Leigh Landy
Associate Editors: Ross Kirk and Richard Orton
Regional Editors: Joel Chadabe, Kenneth Fields, Eduardo Miranda, Jøran Rudi, Barry Truax, Ian Whalley, David Worrall
ICMA Representative: Mary Simoni
International Editorial Board: Marc Battier, Laurent Bayle, Hannah Bosma, Alessandro Cipriani, Simon Emmerson, Rajmil Fischman, David Howard, Rosemary Mountain, Tony Myatt, Jean-Claude Risset, Francis Rumsey

Posted by jo at 09:27 AM | Comments (0)

September 27, 2004

swimming together

tunApic2.gif

tunA

tunA is a handheld ad-hoc radio device for local music sharing created by Arianna Bassoli, Julian Moore and Stefan Agamanolis at Media Lab Europe.

tunA is a mobile wireless application that allows users to share their music locally through handheld devices. Users can "tune in" to other nearby tunA music players and listen to what someone else is listening to. Developed on iPaqs and connected via 802.11b in ad-hoc mode, the application displays a list of people using tunA that are in range, gives access to their profile and playlist information, and enables synchronized peer-to-peer audio streaming. (from the website)
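
A rough sense of the "tune in" mechanism--discover nearby players on the ad-hoc network and read who they are and what they are playing--can be given with a toy UDP-broadcast sketch. The port, JSON message format and behaviour below are assumptions for illustration, not the iPaq application's actual protocol:

import json
import socket

DISCOVERY_PORT = 47000  # invented port on the 802.11b ad-hoc network

def announce(name, now_playing):
    """Broadcast who we are and what we are listening to."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    message = json.dumps({"name": name, "now_playing": now_playing}).encode()
    sock.sendto(message, ("255.255.255.255", DISCOVERY_PORT))

def listen_once(timeout=2.0):
    """Wait briefly for one announcement from a nearby player, or return None."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", DISCOVERY_PORT))
    sock.settimeout(timeout)
    try:
        data, addr = sock.recvfrom(1024)
        return {"peer": addr[0], **json.loads(data)}
    except socket.timeout:
        return None

# In practice each device would announce and listen continuously;
# here we just send one placeholder announcement and listen once.
announce("listener_one", "some_track.mp3")
print(listen_once())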

The idea is to use music to connect people at a local level and engender the sense of a shared experience.

Posted by newradio at 01:00 PM

September 07, 2004

Weather Report

WR7.gif

Drifting Voicemail, Found Sounds Stories

"A simple but clever installation in Tallinn, "Weather Report" by Janek Schaeffer, consisted of helium balloons and an MP3 player/cell phone. Recordings gathered by starting a cell phone call and then sending the phone up into the atmosphere attached to a weather balloon while recording the call on voicemail are remixed with other weather noises, to be played back on the cell phone in the installation." from ISEA 2004: Art Report by scott @ 10:10 pm, 09.05.04.

From Schaefer's web site: "The weather balloons were used in 3 main ways. Firstly, one icy winter's morning I attached a mobile phone to receive and send low resolution sound, letting it float away from the surface of a frozen lake. Secondly, using a digital dictaphone I made time lapse recordings of the sky by floating it 500ft above my lush metropolitan neighbourhood. Lastly, a few all-American friends and I set out to shoot the balloon down late one June afternoon, leaving the sound to parachute back to earth. I let the recordings speak for themselves, no effects, no eq, just straight cut & splice collage where you can hear the edges of time. Some other sounds collected include: underwater ice skaters; flapping; old meteorological kit; leaf blowers; repairing weather damage; various 60's archive audio; melting ice, ski across snow, Minnesota forecasts on radio, in the car and on TV; Tornado chasing & test equipment; snow flakes landing on mic; squeaky tree; National Weather Service balloon launch; walkie talkie tones from my Science Museum workshop etc. Oh, and all temperatures in Fahrenheit...

The result is a highly compressed Found Sound story, heard as a drifting voicemail message from the weather balloon. A hybrid documentary collected and edited outdoors, where it is designed to be heard walking with headphones... so.. go find a Walkman!"

Posted by jo at 10:25 AM | Comments (1)

September 04, 2004

Work by Don Ritter

Physical Aesthetics


"Unlike art from previous times, interactive art considers the viewer’s body as an integral component of the experience; the quality of this experience is measured by the term physical aesthetics, being the pleasantness or unpleasantness of the physical experience involved with the control of an interactive experience. The involvement of a viewer in an artwork could be viewed as being more democratic than the autocratic experiences presented by older art forms." Read the full interview.

voxpopuli-picon.gif

Vox Populi endows anyone with leadership, regardless of their personal wealth, family origins, or political connections. A video projected crowd chants "speech, speech" and encourages visitors to speak from a lectern. When a visitor speaks through the microphone, the voice is amplified and political speeches begin scrolling on a teleprompter located within the lectern. In response to a speaker’s voice, the crowd reacts with varying degrees of support and repulsion.

Other works by Don Ritter (Real movies available on site):

The Technophobe and the Madman: Internet-2 distributed musical telecommunications video-music performance, 2001; 12 channels audio and 6 channels full screen video connection between Frederick Loewe Theatre, New York City and iEAR Studio, Troy, New York; collaborative performance: performers share audio and video between 2 cities.

Performance Telematique: Interactive telecommunications performance between two cities, 1993; PictureTel system, two channels full screen video and two channels audio Obscure, Quebec City, and McLuhan Program, University of Toronto, Canada (500 miles, 800 km).

Nervous Orpheus: Interactive telecommunications performance between two cities, 1994; PictureTel system, two channels full screen video and two channels audio; Vancouver and Toronto, Canada (2800 miles, 4500 km).

Posted by jo at 11:34 AM | Comments (0)

September 01, 2004

Observing the net

hellping_1_0001.gif

Ping Me a Melody

"Ping Melody is a music-net-performance. Temporary and unique state of all actions of Internet users has an influence on form of music composition. Musician (instrumentalist or singer) is playing on acoustic instrument/singing and sounds coming from instrument/voice are shared in packets of data information (granulated), then transmitted to selected Internet location (as "ping" unix command).

Most net artists use specific possibilities of the Internet: they work with the WWW, streaming, etc. In Ping Melody I treat the Internet not so much as a tool which functions in a given way, but as a space that can be observed."
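
Read technically, the piece sends sound out as ping traffic and lets the network's response shape the playback. A hypothetical sketch of that loop, using the ordinary unix ping command rather than the artist's own software; the grain handling and the mapping from round-trip time to playback are invented:

import re
import subprocess

def ping_rtt_ms(host):
    """Return one round-trip time in milliseconds, or None if the ping was lost (unix-style ping)."""
    result = subprocess.run(["ping", "-c", "1", host], capture_output=True, text=True)
    match = re.search(r"time[=<]([\d.]+)", result.stdout)
    return float(match.group(1)) if match else None

def play_grain(grain_id, rtt_ms):
    """Stand-in for granular playback: the network's answer shapes each grain."""
    if rtt_ms is None:
        print(f"grain {grain_id}: lost in the network -> silence")
    else:
        print(f"grain {grain_id}: delayed {rtt_ms:.1f} ms before playback")

for i in range(4):
    play_grain(i, ping_rtt_ms("example.org"))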

Posted by jo at 06:43 PM

August 20, 2004

Motherboard

banner01_185.gif

Ultra Hot Wired Live Art


In February 2003 a group of artists from Norway, Germany, Holland, Canada, England and Austria, working together on a project called Ultra Hot Wired Live Art initiated by Motherboard and BEK, traveled to Kampala with 15 powerful Macintosh computers in their baggage. The aim was to create African computer art in an attempt to answer Brian Eno's question: "How can you africanize a computer?"

In collaboration with over twenty artists and musicians in Uganda, the artists held a workshop to share competencies in music and art production that utilize digital tools and the Internet. This was to develop a social and technical infrastructure for the creation of networked performances and future collaborations. More info

Posted by jo at 04:10 PM | Comments (0)

August 18, 2004

play that city

index.gif

Sonic City


Sonic City, the artists write, is a project exploring mobile interaction and wearable technology for everyday music creation. We have designed, implemented and evaluated a system that creates electronic music based on sensing bodily and environmental factors. Mapping these to the real-time processing of concrete sounds, Sonic City generates a personal soundscape co-produced by physical movement, local activity, and urban ambiance. Encounters, events, architecture, (mis)behaviours – all become means of interacting with or 'playing the city'.
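
The mapping idea--readings from body-worn and environmental sensors steering the real-time processing of concrete sounds--might look roughly like this. The sensor names, ranges and effect parameters are invented for illustration; the published system's actual mappings differ:

def soundscape_parameters(pace_steps_per_min, ambient_light, noise_level_db):
    """Map walking-context sensor readings to sound-processing parameters.

    All names and ranges are invented: ambient_light runs 0 (dark underpass)
    to 1 (bright street); the effects are placeholders for real-time processing.
    """
    return {
        "grain_rate_hz": 2.0 + pace_steps_per_min / 30.0,              # faster walking densifies the texture
        "filter_cutoff_hz": 300 + 4000 * ambient_light,                # brighter surroundings, brighter sound
        "reverb_mix": min(1.0, max(0.0, (noise_level_db - 40) / 40)),  # louder streets swell the wash
    }

print(soundscape_parameters(pace_steps_per_min=110, ambient_light=0.3, noise_level_db=72))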

In this project, our intention is to break out of traditional contexts for music creation to explore creative possibilities within local surroundings and mundane activities. Wearing Sonic City, anyone can experience a simple walk down the street as an expressive act, a path through the city as a personal composition. As a complement to lived urban experience, it is an intimate soundscape intended to enhance perception and encourage new uses of the urban landscape.

Posted by newradio at 07:17 PM | Comments (0)

August 03, 2004

Lag and Flux as Starting Point

trans_box_white.gif

Simultaneous Translation

John Roach is working on a multilocation networked audio performance called Simultaneous Translation which will involve players from the USA, Spain, France, Italy and Germany. The first performance is slated for November 2004 in Madrid, Spain. Other manifestations are planned for NYC and Trondheim, Norway.

"In my first foray into networked performance entitled Negative Space it was the limitations of the streaming media which made the project difficult and exciting. This lag and flux is the starting point of Simultaneous Translation in which the idea of flux and slippage is put in the forefront and is compared to the slippages and mutations of language as it evolves. Another point of comparison is to the delays that occur on the web as data passes from router hop to router hop. In fact traceroute data will be used to manipulate audio streams from the remote participants, making the delay of the internet itself an active player in the project."

Posted by jo at 05:03 PM | Comments (0)

August 01, 2004

Distributed Opera

The BRAIN OPERA

gesture1.gif

Premiered in 1996, this interactive musical journey was presented simultaneously in physical and cyber space. A viewer could perform live in the Brain Opera performances via "The Palette," a musical instrument. This advanced, multi-user Java applet was only active during Brain Opera performances. You could also see and hear live video and audio broadcasts of the Brain Opera performances.

The culmination of the Brain Opera experience was the 45-minute Final Performance--a 3-movement composition that tied together the project's many musical, visual, and textual strands into a unified tapestry. Three performers shape, select, and interpret precomposed and audience-created elements, using specially-designed hyperinstruments: the Sensor Chair (which translates wireless body movement into sound); the Gesture Wall (a modification of the same instrument from the Mind Forest); and the Digital Baton (which combines pointing, squeezing/selecting, and motion sensing). A large curved screen and multiple projectors allow a constant flow of images to be presented in coordination with the music, illustrating Minsky's words, providing counterpoint to the music, or illuminating the actions of the performers. The audience is also invited to dance on a Sensor Carpet during the Finale, adding sound to the richness and intensity at the end of the work.

Posted by michelle at 04:51 AM

July 31, 2004

Historically Locating Distributed Performance

2004.04.05 Helen Thorington, writing about an early conference proposal:

The following are examples of distributed performances:

Finding Time (Jesse Gilbert, 2000). Finding Time linked performers on six continents in a series of live musical events, each an hour’s duration.

World Wide Simultaneous Dance (1998, Laura Knott). The project took place simultaneously in twelve countries.

In both examples the performers played to audiences in their performance spaces and to a worldwide Internet audience by means of especially created websites.

I then move on to a brief history, beginning with Kaprow's "Hello Hello" in the mid-60s--about which I can find very little information other than that it linked five sites in a television event. And on to Kit Galloway and Sherri Rabinowitz.

But the history is not the question for me. Internet works resonate with historical issues. But what I'm beginning to think as I research for the conference is that we may already have witnessed the passing of the (pre)structured presentation of large-scale performance works--I keep thinking, for instance, that while Adrift (1997-2001) was very different from other works produced at its time, it was also very traditional. It was a work by three artists (the senders), in which its audiences had no agency at all--like radio, like television, they were cast in the role of "receivers."

adrift_6.gif

From the Adrift performance

Now...when did it happen?...the Internet environment is in a far more participatory stage, and people's interest runs to works/platforms/environments in which the former viewer can play an active role. Which may begin to explain why there are really not very many interesting large performance works being produced at this time. Not only is the technology still difficult but the environment is no longer congenial.

Or am I wrong? And if not, what does this do for/to our conference?

I'm going to think about this for a while. Right now my only thought is that if this is so, the conference itself should reflect it and include the many twists and turns performance has taken in recent years.

Posted by michelle at 01:22 PM | Comments (0)

July 30, 2004

Distributed Performance or the Left Hand Knows...

July, 1962:

Nam June Paik devised a piano concerto for simultaneous performance in San Francisco and Shanghai, with the left hand part being played in the USA and the right hand in China. The first telecast to Europe and America took place over Telstar in July 1962.

Posted by newradio at 07:33 PM | Comments (0)

Networked Local Performances

Three Performances: 2001-03: A Comment

The three location-specific performances Dialtones: A Telesymphony, Flip Flop, and Texterritory--you had to be there to experience them--introduced in the last posts were produced between 2001 and 2003. They made use of networked technologies--mobile phones, video/audio relayed by wireless broadband--to involve their audiences in the creation of the performances. Each has done this successfully in its own way, while maintaining control, in the case of Dialtones a tight control, over the performance itself.

Posted by newradio at 10:06 AM | Comments (0)

July 29, 2004

Networked Local Performance

Dialtones: A Telesymphony

Dialtones, as described on Golan Levin’s web site, "is a large-scale concert performance whose sounds are wholly produced through the carefully choreographed dialing and ringing of the audience's mobile phones. Because the exact location and tone of each participant's mobile phone can be known in advance, Dialtones affords a diverse range of unprecedented sonic phenomena and musically interesting structures."

Preparatory to the concert, members of the audience register their wireless telephone numbers at secure Web kiosks located in the performance space. In exchange for this information, they then receive seating assignment tickets for the concert venue. New "ringtones" are then automatically downloaded to their handsets. During the concert, a small group of musicians perform the phones en masse by dialing them up with a specially designed, visual-musical software instrument. "Because the audience's positions and sounds are known to the Dialtones computer system, the performers can create spatially-distributed melodies and chords, as well as novel textural phenomena like waves of polyphony which cascade across the crowd; these musical structures, moreover, are visualized by a large projection system connected to the performers' interfaces. Towards the end of its half-hour composition, Dialtones builds to a remarkable crescendo in which nearly two hundred mobile phones peal simultaneously."
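
Because each registered phone's seat position and ringtone are known in advance, a "wave of polyphony cascading across the crowd" reduces to dialing phones in an order derived from their seat coordinates. A toy sketch of that idea follows; the seat layout, timing values and the dial() stub are invented, and the real performance instrument is a visual-musical interface, not this script:

import time

# phone number -> (row, column); numbers and layout are placeholders
seats = {"+43000000001": (0, 2), "+43000000002": (1, 5),
         "+43000000003": (2, 1), "+43000000004": (3, 7)}

def dial(number):
    print(f"ringing {number}")  # stand-in for the actual telephony call-out

def cascade(delay_per_column_s=0.25):
    """Ring phones column by column so the sound sweeps across the crowd."""
    previous_column = None
    for number, (row, column) in sorted(seats.items(), key=lambda item: item[1][1]):
        if previous_column is not None and column != previous_column:
            time.sleep(delay_per_column_s)  # pause before the wave reaches the next column
        dial(number)
        previous_column = column

cascade()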

Posted by newradio at 06:26 PM

July 27, 2004

Networked Streaming Audio Performance

forwarded by Shu Lea Cheang -

tramwire1.gif
TRAMJAM - VIENNA RUSHHOUR by Mumbai Streaming Attack
A multi-track-multi-driver mix hub streaming jam session of Vienna city vibe, orchestrated in sync with the city's tram routing schedule.
TRAMJAM is a project that Shu Lea Cheang started when teaching networked performance at Zurich's HGKZ. The work will go on to other cities where there are tramlines. The next stop: Rotterdam and the DEAF Festival '04, Affective Turbulence: The Art of Open Systems, in November.


Thanks for sharing this event Shu Lea.
Would you comment on your involvement in this project?
Can you, for instance, compare the Zurich and Vienna experiences?
Is there online documentation of the Zurich events that you can direct folks to?

Collaborative and collective process is really important to us (blog & conference organizers) and we see it as a core component of networked_performance - both within groups developing work, and with the inclusion and contribution of viewers/users/participants/ to the work. The website notes that 'the Mumbai Streaming Attack study group expands to include local participants for each performance'. Do you mean in the respect that participants can upload sounds and contribute in that manner - or do you also connect with local artists for implementation/augmentation of prior iterations of the work? Would you talk about the technology and process involved?

Would you give us an overview of your personal practice and describe how this particular project is a continuation of that or has grown out of your former practice?
Would you comment on your specific involvement in this project?

ABOUT THE WORK: Each driver/jammer conceives and collects soundfiles for a chosen tram route, then joins the others on location and online to perform collective impressions of the Vienna city limits, whose tramline connectivity forms the city's transport mainframe.

Along each tram route, personal, social and political associations are considered for the composition of the specific sound files. Local sound artists, hackers and jammers are invited to "drive" and "mix" the (sound)tracks. The city's tram info broadcast cuts in to report the happenings by the minute as the trams cable through the city.

TRAMJAM also extends its programming to involve the city's mobile community. Through mobile devices, the travelling public are invited to send in voice/sound/text messages for the mix.

The performance is streamed live on the internet and on local radio. We invite the travelling public to listen in on their portable radios. The orchestration of the tramlines/tramtracks is a self-directed syncopation with open-source soundfiles that are accessible to all participants.
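
One way to read "orchestrated in sync with the city's tram routing schedule" is as a simple scheduler: when the timetable says a tram reaches a stop, the soundfile chosen for that stretch of the route is cued into the mix. A hypothetical sketch, with invented timetable entries, stops and file names:

from datetime import datetime

# (departure time, stop, soundfile chosen for that stretch of the route) -- all invented
timetable = [
    ("17:02", "Stop A", "line1_stop_a.wav"),
    ("17:07", "Stop B", "line1_stop_b.wav"),
    ("17:13", "Stop C", "line1_stop_c.wav"),
]

def due_now(now):
    """Return the soundfiles whose scheduled tram stop falls in the current minute."""
    current = now.strftime("%H:%M")
    return [sound for departure, stop, sound in timetable if departure == current]

for sound in due_now(datetime.now()):
    print("cue", sound)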

Mumbai Streaming Attack is a networked performance study group currently based at SNM/HGKZ in Zurich. The group first performed TRAMJAM-ZURICH RUSHHOUR for broadcast at reboot.fm (Berlin) in February, 2004. The group's members expand to include local participants for each tram city performance.


Posted by michelle at 01:58 PM | Comments (2)