July 10, 2007



A return to physical devices

The second part of Don Norman’s two-part essay on “The Next UI Breakthrough” [RESTRICTED ACCESS] appears in the July/August 2007 edition of ACM Interactions. In it, he describes how physicality is being re-introduced into the user interface of computers. He describes physicality as more extensive than tangible computing and “embodiment” (Paul Dourish’s explication of computing that is both social and tangible, because the manipulation and handling of objects is always part of social activities). For Norman, “physicality” is something new in that it is a return: for a period, user interfaces avoided mechanical manipulation of knobs, dials, and switches in favor of things like so-called soft switches. Now switches and dials have returned, as have more advanced interface forms that can respond to gesture.

Physicality: the return to physical devices, where we control things by physical body movement, by turning, moving, and manipulating appropriate mechanical devices.

We have evolved as physical creatures. We live in a complex, three-dimensional world filled with physical objects. We are analog beings in an artificial world of digital devices, devices that abstract what is powerful and good from the physical world and turn it into information spaces, usually in arbitrary ways. These new approaches put the body back into the picture. They require us to control through physical action rather than virtual, which means through mechanical devices, not electronic or graphic.

I’d speculate that part of the return has to do with the fact that the old metaphors of turning a knob, for example, are effective and meaningful. But beyond just switches and control inputs, Norman seems to be speculating that the return to physicality will present opportunities for new kinds of interaction, beyond just switching things on or off, and, hopefully, for interaction beyond typing. Despite Norman’s observation that we’re returning to physical controls, one form of physical control — the keyboard — has never left. I’d be interested in forms of user interface that didn’t even touch the kinds of computing models that keyboards enforce. Can there be a form of computing that relies entirely on physical movement rather than tapping on little plastic squares? [blogged by Julian Bleecker on techkwondo]

Posted by jo at 06:34 PM | Comments (0)

June 07, 2007

Scratch, Media Lab Video

Scratch is a new programming language that makes it easy to create your own interactive stories, animations, games, music, and art -- and share your creations on the web. Scratch is designed to help young people (ages 8 and up) develop 21st-century learning skills. As they create Scratch projects, young people learn important mathematical and computational ideas, while also gaining a deeper understanding of the process of design. Scratch is available free of charge; go to Download. It is currently available for Mac OS X and Windows.

Posted by jo at 01:21 PM | Comments (0)

May 11, 2007

Visual Voice Pro


An Ultra-Responsive Environment

Visual Voice Pro creates an immersive, reactive digital playspace. The installation is comprised of a sensitive microphone, a computer with a data projector, and custom software written especially for the space. The microphone listens to all of the sounds in the room, from tiny footsteps to laughter to singing or even banging a drum. The computer instantly processes the sounds to create abstract, beautiful graphics. Action and re-action are clearly, vibrantly displayed. When the room is quiet, the scene falls still and dark. When there is noise there is activity. The louder the noise the bigger the effect.
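The sound-to-image mapping described above (a quiet room falls still and dark; the louder the noise, the bigger the effect) can be sketched as a simple loudness-to-size function. This is an illustrative sketch of the idea, not Montandon's actual software; the function names and ranges are hypothetical:

```python
import numpy as np

def rms_level(samples):
    """Root-mean-square loudness of an audio buffer (floats in [-1, 1])."""
    return float(np.sqrt(np.mean(np.square(samples))))

def effect_radius(samples, min_r=0.0, max_r=100.0):
    """Map loudness onto the size of a projected graphic:
    silence -> a still, dark scene; louder sounds -> bigger effects."""
    level = min(rms_level(samples), 1.0)  # clamp so clipping can't overshoot
    return min_r + (max_r - min_r) * level

# A faint footstep versus a banged drum:
quiet = np.full(1024, 0.01)
loud = np.full(1024, 0.8)
```

Here `effect_radius(quiet)` stays near zero while `effect_radius(loud)` approaches the maximum, which is the whole interaction loop in miniature: listen, measure, draw.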

"The entire experience has been designed based on requests from parents and care-givers of people with autism, cerebral palsy, and all sorts of behaviour disorders," says Adam Montandon, "So I am very keen to hear what anybody thinks."

Adam Montandon is the Director of HMC Interactive. You can contact him at Grosvenor House, Belgrave Lane, Plymouth, PL4 7DA, UK :: + 44 (0)845 20 11 462 :: Mob: + 44 (0)772 17 36 021.

Posted by jo at 09:46 AM | Comments (0)

April 15, 2007

Turbulence Commission: The Simultaneous Translator


ONLINE Performance Today

Turbulence Commission: The Simultaneous Translator by John Roach and Willy Whip [Requires Windows OS] LIVE PERFORMANCE: Sunday April 15; 12:00 PM EST to 3:00 PM EST [Mac users can listen via the player of their choice].

"The Simultaneous Translator" (SimTrans) is a Windows based audio interface that enables anyone to load audio streams and manipulate them in real time on the Internet. SimTrans makes the delays and fluctuations of the Internet visible and audible. The Internet becomes your collaborator as you create your mix, and the instability you usually try to avoid becomes a tool for creation. Distance and delay are manifest within the interface numerically and as a series of sliding heads; there is also a link to Google Earth where you can watch the dynamic flight of data travel between yourself and the audio source.

“SimTrans” is a 2007 commission of New Radio and Performing Arts, Inc., (aka Ether-Ore) for its Turbulence web site. It was made possible with funding from the Murray G. and Beatrice H. Sherman Charitable Trust.
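The core idea of SimTrans, treating network delay as compositional material rather than a nuisance, can be sketched in a few lines. This is a hypothetical illustration of the concept, not the SimTrans software itself:

```python
import time

def measured_delay(sent_ts, received_ts=None):
    """Seconds between a packet's source timestamp and its arrival here."""
    if received_ts is None:
        received_ts = time.time()
    return max(0.0, received_ts - sent_ts)

def delay_as_offset_ms(delay_s, step_ms=50):
    """Quantize a network delay into a rhythmic offset for the mix,
    so jitter becomes a musical parameter instead of something to hide."""
    return int(delay_s * 1000 // step_ms) * step_ms
```

A 125 ms delay to a stream in France, say, would quantize to a 100 ms offset, and as the Internet's delays fluctuate, the offsets (and the mix) shift with them.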

THE PERFORMANCE: "The Simultaneous Translator" grew out of the artists’ live networked performance project "Simultaneous Translation," in which the delays of the internet are used to dynamically affect the live performances of geographically distant artists.

The upcoming performance will take place from 12:00 PM EST to 3:00 PM EST on Sunday April 15. Log on via http://turbulence.org/Works/simtrans.

Participants: Greg Davis (USA), Kenneth Goldsmith (USA), John Hudak (USA), Keyman (France), Lawrence Li (China), Mice69 (France), Miguel Ramos (Spain), Joe Reinsel (USA), John Roach (USA) and Willy Whip (France).


JOHN ROACH doesn't consider himself an installation artist, a sound artist, or a sculptor, but prefers to think of himself as a nomad, touching down in whatever place is most hospitable to his ideas. Recent projects have been an installation at the 2B Gallery in Budapest, Hungary; a collaborative performance with objects and video at the Saint Stephen Museum in Szekesfehervar, Hungary; and a web video project called Sweet Music. He continues to work with Willy Whip on their long-standing live networked performance project Simultaneous Translation.

WILLY WHIP is a designer and teacher in hypermedia interactivity. Outside his institutional work he likes to produce mashups that fertilize his own secret garden. This personal research and development leads him on a quest for hybrids: connect this information to that information; grow new contents; release new senses. Recent activity includes projects with the artists Anika Mignotte, Reynald Drouhin, and Du Zhenjun.

Posted by jo at 11:55 AM | Comments (0)

April 10, 2007

Turbulence Commission: The Simultaneous Translator


Live Performance on April 15

Turbulence Commission: The Simultaneous Translator by John Roach and Willy Whip [Requires Windows OS] LIVE PERFORMANCE: Sunday April 15; 12:00 PM EST to 3:00 PM EST

The Simultaneous Translator (SimTrans) is a Windows-based audio interface that enables anyone to load audio streams and manipulate them in real time on the Internet. SimTrans makes the delays and fluctuations of the Internet visible and audible. The Internet becomes your collaborator as you create your mix, and the instability you usually try to avoid becomes a tool for creation. Distance and delay are manifest within the interface numerically and as a series of sliding heads; there is also a link to Google Earth where you can watch the dynamic flight of data travel between yourself and the audio source.

“SimTrans” is a 2007 commission of New Radio and Performing Arts, Inc., (aka Ether-Ore) for its Turbulence web site. It was made possible with funding from the Murray G. and Beatrice H. Sherman Charitable Trust.

THE PERFORMANCE: The Simultaneous Translator grew out of the artists’ live networked performance project "Simultaneous Translation," in which the delays of the internet are used to dynamically affect the live performances of geographically distant artists.

The performance will take place from 12:00 PM EST to 3:00 PM EST on Sunday April 15. Log on via http://turbulence.org/Works/simtrans.

Participants: Greg Davis (USA), Kenneth Goldsmith (USA), John Hudak (USA), Keyman (France), Lawrence Li (China), Mice69 (France), Miguel Ramos (Spain), Joe Reinsel (USA), John Roach (USA) and Willy Whip (France).


JOHN ROACH doesn't consider himself an installation artist, a sound artist, or a sculptor, but prefers to think of himself as a nomad, touching down in whatever place is most hospitable to his ideas. Recent projects have been an installation at the 2B Gallery in Budapest, Hungary; a collaborative performance with objects and video at the Saint Stephen Museum in Szekesfehervar, Hungary; and a web video project called Sweet Music. He continues to work with Willy Whip on their long-standing live networked performance project Simultaneous Translation.

WILLY WHIP is a designer and teacher in hypermedia interactivity. Outside his institutional work he likes to produce mashups that fertilize his own secret garden. This personal research and development leads him on a quest for hybrids: connect this information to that information; grow new contents; release new senses. Recent activity includes projects with the artists Anika Mignotte, Reynald Drouhin, and Du Zhenjun.

Posted by jo at 09:20 AM | Comments (0)

March 23, 2007



pure:dyne

Free OS dedicated to Real Time A/V Processing

pure:dyne has been created to provide a complete and ready-made environment for artists and developers who are looking for a free operating system dedicated to real time audio and video processing. The pure:dyne project provides tools and an optimized platform to try out and work on a large range of applications. It comes with optimized and tested software such as SuperCollider, Icecast, Csound, Packet Forth, fluxus and much more, including of course Pure Data and a great collection of essential externals and abstractions (PDP, PiDiP, Gem, GridFlow, RRadical, PixelTango ...). The studio "classics" have not been forgotten (Ardour, LADSPA, seq24, Audacity ...) and numerous essential graphics applications are also bundled (Inkscape, Gimp, Blender ...).

The pure:dyne project is a growing community effort maintained by media artists for media artists. It is an ideal platform for audiovisual performances, installations and FLOSS+Art workshops and courses.

Key Features:
- Jack-oriented GNU/Linux distribution
- Complete FLOSS home studio solution and live performance system
- No bloat! Fast, minimal desktops (fluxbox, evilwm, dwm)
- Live distribution (works the same on CD, HD, USB, ...)
- UnionFS system and multiuser support
- Low-latency kernel (Ingo Molnar's -rt kernel patches)
- Support for FireWire audio devices (FreeBoB inside)
- Optional support for NVIDIA and ATI evil blobs
- mmx + sse gcc optimisations applied when relevant
- Compatible with any dyne:II modules
- Easy installation, maintenance and updates
- SDK tools (dynesdk + milkman) and gcc included for customisations
- International language support
- Mactel support
- More modules available via direct download

Website/Doc -> http://puredyne.goto10.org
IRC server -> irc.goto10.org #pure:dyne
MailingList -> http://lists.goto10.org/cgi-bin/mailman/listinfo/puredyne


Direct Download mirrors
Torrent (thx to Mute public library)

The pure:dyne developers are:
Aymeric Mansoux - http://320x200.goto10.org
Chun Lee - http://sonicvariable.goto10.org
Antonios Galanopoulos - http://stereocilia.goto10.org
Karsten Gebbert - http://krgn.net
Rob Canning - http://www.robcanning.info

GOTO10 would like to thank jaromil (http://rastasoft.org) for providing a very nice base system to work on and on which we can grow pretty things. We would also like to thank everyone on the pure:dyne mailing list for their constant help, feedback and support.

Posted by jo at 01:05 PM | Comments (0)

March 08, 2007

Second Life web 2.0 tools


Twitter and Sloog

The Beautiful Simplicity of Twitter (and BlogHUD): Second Life resident Koz Farina, creator of the very popular BlogHUD tool for blogging from within SL, is developing a system to allow you to cross-post your BlogHUD posts to your account on Twitter, the hot new social site that lets you miniblog along with your friends. Koz is already feeding all BlogHUD posts to a Twitter BlogHUD page. This is just the latest entry into a growing pool of Twitter-to-SL mashups from people like Ordinal Malaprop and Kisa Naumova, among others. And in fact, there’s been a huge flowering of ancillary Twitter apps since the service launched last fall. Why? Because Twitter is incredibly compelling, for a number of reasons. One of the most important, in my opinion, is the almost complete lack of button-based features that Twitter offers to its users. (Although I’d love for someone to build the wish app described at the end of this post.) More » [posted by Mark Wallace on 3pointD]


Sloog Gets its Tags Up in Second Life: I’m going to go ahead and call Sloog the best Web 2.0-style site for the virtual world of Second Life that I’ve seen yet. Sloog lets you tag locations in Second Life, storing them in the Sloog system so you can access them later on the Web. It’s a bit like del.icio.us tagging or Digg-ing virtual places, and it’s not a new function for SL users, but this is the best implementation of it I’ve seen yet. More » [posted by Mark Wallace on 3pointD]

Posted by jo at 12:15 PM | Comments (0)

February 07, 2007



Gemotion

Shows Video in Living 3D

Gemotion is a soft, ‘living’ display that bulges and collapses in sync with the graphics on the screen, creating visuals that literally pop out at the viewer.

Yoichiro Kawaguchi, a well-known computer graphics artist and University of Tokyo professor, created Gemotion by arranging 72 air cylinders behind a flexible, 100 x 60 cm (39 x 24 inch) screen. As video is projected onto the screen, image data is relayed to the cylinders, which then push and pull on the screen accordingly. “If used with games, TV or cinema, the screen could give images an element of power never seen before. It could lead to completely new forms of media,” says Kawaguchi.
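Relaying image data to the 72 cylinders amounts to downsampling each video frame to one brightness value per actuator. A minimal sketch, assuming (hypothetically) a 6 x 12 grid; the actual arrangement and drive signals are not specified in the post:

```python
import numpy as np

def cylinder_drive(frame, rows=6, cols=12):
    """Downsample a grayscale frame (values 0-255) to one mean-brightness
    value per air cylinder in a rows x cols grid behind the screen."""
    h, w = frame.shape
    bh, bw = h // rows, w // cols
    frame = frame[:bh * rows, :bw * cols]          # crop to an exact grid
    blocks = frame.reshape(rows, bh, cols, bw).mean(axis=(1, 3))
    return blocks / 255.0  # 0.0 = fully retracted, 1.0 = fully extended

frame = np.zeros((60, 120), dtype=float)
frame[0:10, 0:10] = 255.0  # a bright patch in the top-left corner
drive = cylinder_drive(frame)
```

A bright region of the image then pushes only the cylinders behind it, so the screen bulges where the picture is "active."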

The Gemotion screen will be on display from January 21 to February 4 as part of a media art exhibit (called Nihon no hyogen-ryoku) at National Art Center, Tokyo, which recently opened in Roppongi. [blogged on Pink Tentacle]

Posted by jo at 12:46 PM | Comments (0)

January 11, 2007

InSense: Interest-based life logging


Mapping First Life Experiences with Virtual World Counterparts

Blum, M. Pentland, A. Troster, G. (2006), InSense: Interest-Based Life Logging, IEEE Multimedia, 13 (4), pp. 40- 48.

The paper describes InSense, a wearable data-collection device based on Vannevar Bush’s Memex principles. It allows users to continuously record their interactions and store them as a multimedia diary. It takes into account sensor readings from a camera, a microphone, and accelerometers. The point is to “classify the user’s activities” and “automatically collect multimedia clips when the user is in an ‘interesting’ situation.”

What is interesting is the set of categories they picked to build their context-aware framework: location, speech, posture, and activities, chosen to represent many diverse aspects of a user’s context. Each has subcategories (for location, for instance: office, home, outdoors, indoors, restaurant, car, street, shop).

The experience sampling approach works like this:

Subjects wear the system for several hours without interacting with it. Audio and acceleration signals are recorded continuously. The camera takes pictures once a minute, and WiFi access points are logged to establish location. After the recording session, the user employs an offline annotation tool, which presents one image at a time, the corresponding sound clip, and a list of labels from which to choose.

What is also curious is their description of their algorithm that calculates the current level of interest of an event based on the context classification.
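The paper's actual algorithm isn't reproduced in the post, but the shape of the idea, scoring an event's interest from the context classification and capturing clips above a threshold, can be sketched with hypothetical categories and weights:

```python
# Hypothetical weights: how "interesting" each classified context is.
# The real InSense classifier and scores are in the paper, not shown here.
INTEREST_WEIGHTS = {
    ("restaurant", "conversation"): 0.9,
    ("street", "walking"): 0.4,
    ("office", "sitting"): 0.1,
}

def interest_level(location, activity, default=0.2):
    """Look up an interest score for the current (location, activity) pair."""
    return INTEREST_WEIGHTS.get((location, activity), default)

def should_capture(location, activity, threshold=0.5):
    """Trigger a multimedia clip when the situation scores above threshold."""
    return interest_level(location, activity) >= threshold
```

With a rule like this, the diary fills itself with restaurant conversations and skips the hours spent sitting at a desk.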

Why do I blog this? I am less interested in the purpose of the system itself (sharing material) than in the data extracted from context readings and how this could be used to tell a story (or to build up a narrative). Of course, given my interest in games, I see this device as intriguing and potentially relevant for mapping first-life experiences onto their virtual-world counterparts; it could go beyond current pedometers that control virtual dogs. [blogged by Nicolas on pasta and vinegar]

Posted by jo at 02:34 PM | Comments (0)

November 10, 2006

In the Chair


Web 2.0 Replaces Music Teachers

"SAN FRANCISCO -- In the cacophony of mashups, widgets and collaboration tools demoed at the third annual Web 2.0 Summit here, one rang out as the biggest crowd pleaser: a musical instrument instruction web app with a golden ear and infinite patience.

Called In the Chair, the application encourages music students and budding guitar heroes to play along with real musicians by syncing scrolling sheet music with a prerecorded video of a band playing the song. The app monitors your playing through your computer's microphone, and gives you instant feedback on whether you played the right note with the right timing.

In the Chair CEO David Evans observed correctly that the application turns music practice into something closer to a video game. He also announced a new web component that lets composers and teachers alter, remix or collaborate on each other's uploaded sheet music..." Continue reading Web 2.0 Replaces Music Teachers by Michael Calore, Wired.
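The right-note-right-time check the app performs can be sketched as a comparison of the detected pitch (in cents) and onset time against the score. The tolerances below are hypothetical, not In the Chair's actual parameters:

```python
import math

def note_to_freq(midi_note):
    """Equal-temperament frequency of a MIDI note number (A4 = 69 = 440 Hz)."""
    return 440.0 * 2 ** ((midi_note - 69) / 12)

def feedback(detected_hz, expected_note, played_at, expected_at,
             cents_tol=50, timing_tol=0.1):
    """Return (pitch_ok, timing_ok): right note within a quarter tone,
    right moment within 100 ms of the scrolling score."""
    cents_off = 1200 * math.log2(detected_hz / note_to_freq(expected_note))
    return abs(cents_off) <= cents_tol, abs(played_at - expected_at) <= timing_tol
```

Run per detected note against the scrolling sheet music, a check like this is exactly what makes practice feel like a video game: every note is scored the instant it is played.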

Posted by jo at 09:06 AM | Comments (0)

October 16, 2006

Interview with Burak Arikan


Pinkie and Open I/O ...

Since John Maeda's group at the MIT Media Lab was renamed from the Aesthetics + Computation Group (out of which Processing emerged) to the Physical Language Workshop, its focus of interest has also shifted slightly. "Physical" hints less at physical computing than at a general interest in the ways the world of networked systems interacts with the various worlds around it, for example the world of art and that of economy.

We will have a closer look at the current main project, Openstudio, next week, and start off with an interview with PLW's most recent graduate, Burak Arikan. In his work he is indeed taking interaction to a physical level, designing tools for "exploring electronic compositions through social means" in his projects Pinkie and Open I/O:

Burak, could you please tell us a bit about your background?

I grew up in Istanbul. It is a very complex place to live. Many social classes, lots of immigrants, beautiful geography, various religions, east west and in between, layers of culture, etc. etc. As a kid I played both on the streets and at home with computers.

I started designing and developing web sites with my friends during my civil engineering studies at Yildiz Technical University. I did some personal projects and commercial work. After finishing the engineering degree, I went to graduate school and studied Visual Communication Design at Istanbul Bilgi University. While studying communication design, I also worked as a visual designer and information architect with several teams and Internet companies in Istanbul. I moved to Baltimore in 2003 and built commercial projects including community web sites, telecom / web products, shopping sites, and social networking services. I ceased all the commercial activity in the summer of 2004 to attend graduate school at MIT. [continue reading at we-make-money-not-art]

Posted by jo at 10:35 AM | Comments (0)

August 08, 2006

Exploded Monologues


Transcending the Bodies' Boundaries

Exploded Monologues, by Troika, is a wearable machine that allows the user to literally send his voice to one or more of four different speakers, each located approximately one meter from the user's mouth. A control interface with four buttons enables the user to choose where his voice will be heard. An additional mask containing a microphone is worn to prevent the listener from hearing the performer's voice directly.

Exploded Monologues starts from a reflection on dialogue through our new technologies of communication and the way they explode the boundaries of our bodies. Exploded Monologues is a performance tool that expands the possibilities of narrative. The user is able to simulate different characters and play with the sonic space perceived by the listener. [via]

Posted by jo at 12:20 PM | Comments (0)

July 19, 2006

justin MC THIS hammond


Mobility is Freedom

MC THIS - Featuring Live Mobile Video Projection @ Festivals & Screenings. What is MC This? Provoking wonder, compelling, awe-inspiring! In a nutshell: a patent-pending mobile video projection unit, created and performed by justin MC This hammond. MC This is the only artist in the world who has brought visual projection from the interior to the streets, rooftops, forests and deserts in one step. MC This can project live images from 4 or more video sources, mix them, add effects to them, and project them anywhere he goes on a number of unique screen surfaces. MC This is innovative and technologically superior; eye-catching and jaw-dropping, to say the very least. MC This has done plenty of private / corporate parties and events, incorporating live feeds of performers, attendees, and sponsors' logos, and layering mind-bending graphics. GUARANTEED technical amazement for your attendees! Mobility is Freedom! [via MAKE]

Posted by jo at 06:56 PM | Comments (0)

July 05, 2006

Handheld Augmented Reality


PDA + Studierstube 4.0

Handheld Augmented Reality: A standard, off-the-shelf Personal Digital Assistant (PDA) constitutes a cost-effective and lightweight hardware platform for Augmented Reality (AR). A PDA provides a simple, well-known user interface, and is fully equipped with a touch-screen and camera for providing a video see-through Magic Lens metaphor of interaction. In our Handheld AR framework, all interactive processing is done exclusively on the PDA without relying on a server infrastructure, which makes this solution highly scalable. Because of the low cost and suitable ergonomic properties of the PDA platform, massive multi-user AR applications become possible for the first time.

It is the goal of this project to demonstrate the first multi-user AR applications with dozens of simultaneous users.


Our software framework Studierstube 4.0 represents the first attempt at creating a complete solution for AR on PDAs. It operates cross-platform (Windows, Linux, Windows.CE) and addresses graphics, video, tracking, multimedia playback, persistent storage, multi-user synchronisation and application authoring tools.

Posted by jo at 08:39 AM | Comments (0)



Eyeliner

3D Technique Remains a Mystery

Eyeliner is a new projection technique that enables 3D phantom images to blend with the physical space in a very convincing way. It consists of high-definition projections onto an invisible foil material set up at a special angle. Using the Eyeliner it is possible to build virtual set designs and let animated or filmed objects and characters blend in with real people and spaces. The Eyeliner is thus capable of creating a new and different narrative space on stage, one that makes use of special effects otherwise known only from movies and computer games.

The Eyeliner is partly developed by the Danish company Vision4 which also holds the Danish rights and know-how to use this new technique.

For a filmed example of the use of the Eyeliner in the staging of H.C. Andersen’s The Steadfast Tin Soldier click here.

Above is a visual example from the 3D performance True Love Is Yet To Come at Donna Karan’s Studio in New York City. The performance marked the opening of last year’s Performance Biennale Performa05 in NYC. [blogged by Lene Mailund on Digital Experience]

Posted by jo at 08:25 AM | Comments (0)

June 24, 2006

Nathaniel Stern: Creative Commons Artist in Residence


Shares [odys]elicit Under CC/GPL

Friday 23 June 2006, live from the iCommons iSummit:

[odys]elicit - a full-body, interactive art installation circa 2001 - is now available under a Creative Commons Attribution-NonCommercial-ShareAlike license, and the source code is available under the GPL. It works with almost any standard webcam (requiring a few drivers on PC)!

[odys]elicit is a large scale, interactive installation where every movement of the viewer, small or sweeping, births stuttering text onscreen. The viewer’s motion elicits, character by character, passages from odys’ text. The piece responds to small movements, writing the text onscreen slowly for the viewer to read, or to rapid passersby, whose full bodies birth hundreds of flying characters, impossible to decode.
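The motion-to-text mechanism, small movements writing slowly, full bodies birthing floods of characters, can be sketched with simple frame differencing. The thresholds and rates here are hypothetical; the actual piece was built in Director/Lingo:

```python
import numpy as np

def elicit(prev_frame, frame, text, cursor, chars_per_unit=0.02):
    """Frame-difference motion detection: the more pixels changed since the
    last frame, the more characters of the source text are 'birthed'."""
    changed = np.mean(np.abs(frame - prev_frame) > 30)  # fraction of moving pixels
    n = int(changed / chars_per_unit)
    emitted = text[cursor:cursor + n]
    return emitted, cursor + len(emitted)

text = "the spaces between language and meaning"
still = np.zeros((48, 64))
moving = still.copy()
moving[:24, :] = 255.0  # half the frame changed: a sweeping movement
```

A viewer holding nearly still emits a character or two per frame, slow enough to read; a rapid passerby lights up most of the frame and releases dozens of characters at once, impossible to decode.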

In odys’ work, viewers are forced to look at the spaces between language and meaning, the luxuries of stuttering and silence as communication, and the effects of accelerated and decelerated time. [odys]elicit physically places viewers at the center of co-invented noise, forced to perform - willingly or not. odys’ text has been reduced down to where it no longer has meaning and is re-birthed, with possibly infinite meanings, or none at all.

Click here to see videos of the piece in action.


Please first check out the read_me file - the PC application requires some extra (free) installs, and you can very easily change video settings or sources, input new text, toggle between birthing letters or full words, adjust the motion-tracking tolerance levels for different lighting, or change the direction the text will go, all on the fly! Send me error messages if you encounter any bugs, or tell me about your successes! I have yet to fully test the OS 9 or PC versions. Oh, and let me know about any parties or exhibitions that this thing is a hit at, too ;)

OS X Application
OS 9 Application
PC Application (still testing)
Source Code (in Director/Lingo + TTC-Pro; demo versions of these will work)

@ Rio iSummit [posted by nathaniel on nathanielstern.com]

Posted by jo at 10:38 AM | Comments (0)

June 08, 2006



IntelligentDesigner

Easily Create Rich Multilayered Living Music

I have been super busy getting IntelligentDesigner* (ID) ready for general consumption. Officially it's due this Fall - but you can check out its most recent iteration at two upcoming events. If you are in Pittsburgh (PA, USA), it will be part of The Urban Garden Party at the Mattress Factory this Friday, June 9th. If you are in Williamsburg (Brooklyn, NY, USA) next Friday, June 15th, the show with the ChristianScienceMinotaur is sure to be a trip. (links with details follow).

The Mattress Factory event will incorporate dance, projection and sound, as performance artists move their bodies in relation to the living sounds manipulated in real time.

At Monkeytown ID will be aurally and visually woven into the improvisational musical act of ChristianScienceMinotaur. By way of a bongo-height touch screen kiosk, audience members can alter the music that Nat and Leo (XSM duo) play. The audience gets to DJ with live living musicians.

I am diligently working on getting my website in order to better communicate what this is all about. In the meantime, the best way to understand this new musical paradigm is to go check it out for yourself.


* IntelligentDesigner is net software that enables pretty much anyone to control things in an uncontrollable way. ID was originally invented to enable improvisational behavior simulating human emotional mechanics between Jason Van Anden's life-size emotive robots Neil and Iona. In its current incarnation, ID can be used to easily create rich, multilayered living music from samples, with many more esoteric applications coming down the pike. Details will be found on Jason Van Anden's website, smileproject.com, soon.

Jason Van Anden

Posted by jo at 08:17 AM | Comments (0)

April 27, 2006

Social Video


Ten Video Sharing Services Compared

"The number of video-sharing sites has shot through the roof recently, as dozens of companies try to become the Flickr of the online video world. To this end, many video services have started offering new features like editing and remixability in an attempt to snatch a piece of the ever-expanding online video pie. But for the average user--who just wants to post a video on the 'net and share it with some friends--there are already too many options out there. All one really wants to know is, which site is going to work, with the least amount of hassle?

I took 10 of these sites [Eyespot, Google Video, Grouper, Jumpcut, Ourmedia, Revver, Videoegg, Vimeo, vSocial, YouTube] out for a test drive, and picked some winners. If you want to post, watch, share, or edit video online, this post's for you.

To test each service, I uploaded my demo reel (a 15MB Sorenson 3-encoded Quicktime file) to each site and compared video quality, site interface, community features, and functionality. Where applicable I also tried to embed the resulting video in a Wordpress page. Many of these sites are still in beta, and their functionality could change in the coming months, but if you're looking to post and share video today, this is the current state of things." [posted by Ryan Bilsborrow-Koo on DVguru]

Posted by jo at 08:00 AM | Comments (0)

March 26, 2006



Shacktopus

Technomadics: Camping Gear for Connectivity

Steve Roberts has created the Shacktopus, an easy-to-pack communications platform with multiple sensing devices. He writes: We are talking about extreme technomadics. I have spent over two decades wandering the US on computer-laden bicycles, embedding systems into amphibian micro-trimarans and geeked-out kayaks, and otherwise building a career around the tools of high-tech adventure. But all those systems, despite their utility in opening doors wherever I wandered and rendering my physical location irrelevant, had one fundamental flaw: they physically incorporated the essential electronics. It's not like there was a choice, of course... the BEHEMOTH bicycle weighed 580 pounds, about ten times more than I could imagine carrying around in a pack. It was cool to have satellite email while pedaling across Iowa in 1991, but if I was more than a few hundred feet from the bike, that $1.2 million package of custom gizmology was essentially useless.

Things have changed. Lithium-Ion batteries, power-miserly microprocessors with kick-ass performance, tiny Wi-Fi beam antennas with ten-mile range, ultraminiature all-mode DC-to-daylight transceivers, gigabytes on flash, Linux on a tiny PC board... suddenly, one can pack a LOT of communication systems, computing tools, and electrical power into a shockingly small enclosure. The engineering problems are daunting, with a variety of protocols, signal levels, and serious noise issues, but with care, enough gear for open-ended untethered information survival can fit in a shoulder pack.

And that's what Shacktopus is all about. Named in honor of its ham-radio roots coupled with its multi-pronged design, this is a complete Shack-to-Go with added Internet access, adaptable power system, remote control, environmental sniffing and data logging, tracking and telemetry, multiple audio options, on-board security system, robot-operator and logging capability, universal audio filter, synthesized speech response and event notification, and, well, the list goes on for quite a while. It's BEHEMOTH in a pack, only more so.

Posted by jesse at 11:36 AM | Comments (0)

March 17, 2006

Jon Udell on Screencasting


The Evolution of a Wikipedia page

"Heavy metal umlaut: the making of the movie: When Wikipedia's page on the heavy metal umlaut made the rounds of the blogosphere recently, I decided to make a documentary screencast that would illustrate and narrate Wikipedia's editorial process. The screencast was well received, and a number of people wrote wondering how I made it. In this column I'll explain the process, then conclude with some general thoughts about the value of the screencast medium. [Full story at O'Reilly Network]"

"Making movies of software: In the same way that blogging has radically democratized basic web publishing, I expect that Windows Media Encoder -- along with counterparts that I hope become broadly accessible on other platforms -- will democratize the use of screen videos. The medium needn't be available to professionals only. Lots of folks need to describe, demonstrate, or document the behavior of software, and this is a powerful way to do it. [Full story at O'Reilly Network]

In the fourth installment of my Primetime Hypermedia series, I show how making a screen video can be almost as straightforward as taking a static screenshot. I continue to be fascinated by this medium. The ability to capture, narrate, and share software experiences -- as I did recently here and here -- enables an important mode of communication that we've barely begun to exploit." From All About Screencasting.

[via Howard on Smart Mobs]

Posted by jo at 04:20 PM | Comments (0)

Enter – Explorations in New Technology Art



enter – explorations in new technology art is pleased to announce that it will be commissioning three media art projects in 2006/07. enter is a festival that forms a key part of Arts Council England East’s overall strategy for arts and new technologies. Comprising individual events and culminating in an international festival in April 2007 in Cambridge, enter is also designed to be a banner under which communities, arts organisations and practitioners in the field of new technology art can interact, sharing ideas, opinions and knowledge.
enter will work in partnership with a number of regional and national arts organisations and the higher education and business sector.

Unknown territories: Adventures in Space is the title of the festival’s showcase programme. The festival will take place 25-29 April 2007 and consist of a showcase of events and a conference programme, focusing on two main thematic strands: artists’ interaction with audiences and space (public / private / virtual / augmented); and bridging gaps and linking innovative ideas between new technology art, science and business.

COMMISSIONS: We are looking for projects that engage with audiences regionally, nationally and/or globally. Although Cambridge is at the heart of the festival, the commissions should have a wider geographical and time-based reach.

For a limited time, we want to create an augmented environment within Cambridge and beyond where passers-by and the festival audience alike randomly encounter media art; influencing their understanding of space and interaction, working with people’s responses to technology and stimulating new experiences. enter accompanies them on their journey into unknown territories at unexpected locations within the city and its surroundings, including gateways, transport infrastructure, commercial areas and hidden pathways. Detailed Call; Submission Form.

Posted by jo at 02:05 PM | Comments (0)

March 14, 2006



Tool for Integrating Sensors, Sounds, Devices as Interactive Objects and Environments

NADA affords both the technical novice and the expert a unified platform for prototyping physical interfaces and digital content - from handheld product concepts to reactive environments that can be connected across the Internet.

Working models that require less work... NADA is designed for maximum versatility and efficiency. It is a true cross-platform application, and it connects to a variety of commercially available hardware for bringing digital information into and out of standard PCs and Macs. Supported hardware is automatically detected and auto-configured, minimizing complexity for the user. Project authoring with NADA can be done in either Macromedia Flash MX2004 (or later) or Java™. Even beginners without programming experience can design and prototype with sensors, actuators, lights, switches and animation using an intuitive graphical environment. Graduate to finer levels of control by developing projects using the NADA API for ActionScript and Java™. [via MAKE]

Posted by jo at 08:45 AM | Comments (0)

March 13, 2006

Sagasnet Workshop:


Developing Narrative Games/On-line Worlds

Call for participation: Sagasnet workshop on Developing Narrative Games/On-line Worlds, headed by Ernest Adams; guest speaker: Jessica Mulligan. March 31 - April 6, 2006, Academy for TV and Film, Munich, Germany.

In this intensive five-day workshop, the participants will work both alone and in teams to develop a design treatment for a narrative game or online world. They will begin by studying the fundamental principles of computer game design, and then go on to explore the relationship between interactivity and narrative - in single-player games as well as in online worlds. There will be a few lectures, but most of the activities will be fully hands-on, involving brainstorming, discussion, and presentations from the participants to the group.

BIO__Ernest Adams: Ernest Adams is a game designer, writer and teacher, working as part of the International Hobo design consortium. Adams graduated from Stanford University (California, USA) with a Bachelor of Arts degree in Philosophy in 1986. Initially, he worked as a software engineer in the electronics industry, but switched to game development in 1989, joining a company called Interactive Productions (later P.F.Magic). There he wrote the PC client for an early America Online game, RabbitJack's Casino. He also did the programming for Third Degree, a multi-player party game for the short-lived CD-I player. Moving to Electronic Arts in 1992, Adams became a game designer. He designed the first-ever CD-ROM edition of John Madden Football, for the 3DO Multiplayer. For the next several years he served as the audio/video producer for the Madden series, and under his guidance Electronic Arts perfected the technique of assembling sentences from audio fragments to produce seamless play-by-play commentary. During this period Adams also helped to produce the Game Developers' Conference and founded the International Game Developers' Association, the first worldwide professional society for game developers. In 1999, Adams moved to Great Britain to become a lead designer at Bullfrog Productions, a subsidiary of Electronic Arts. There he worked on two prototype projects, Genesis: The Hand of God and Dungeon Keeper 3. In 2000 Adams left Bullfrog to become a design consultant, joining the International Hobo consortium. He has worked for both large industry publishers such as THQ and Ubisoft, and smaller clients with special projects, including the Guinness Book of World Records and Zoo Atlanta. He writes a regular column on game design called "The Designer's Notebook" for the Gamasutra developers' webzine, and has coauthored a book on the subject, Andrew Rollings and Ernest Adams on Game Design. 
He has also written a second book, Break Into the Game Industry: How to Get a Job Making Video Games. Adams lectures frequently at colleges, art festivals, and game conferences, and conducts game design workshops around the world. His professional web site is at www.designersnotebook.com.

BIO__Jessica Mulligan: Jessica Mulligan is a 20-year veteran of the online games industry. Recently as Executive Producer and Creative Director at Turbine Entertainment, she oversaw the award-winning MMORPG franchise, Asheron's Call. Before Turbine, Ms. Mulligan held a number of positions within the games industry, including president of The Themis Group, vice-president of development for Engage Games Online, director of online services for Interplay Productions, a director on Ultima Online for Electronic Arts and director of operations at MM3D, Inc on the Middle Earth Online project. Ms. Mulligan has been involved in the design, development or management of more than 50 online games, including a dozen persistent worlds. A respected writer on industry issues, she was the author of the popular "Biting the Hand" column on Skotos.net for over six years. Her most recent book, "Developing Online Games: An Insider's Guide," with co-author Bridgette Patrovsky (bridgette[at]mm3d.com), was released in the US by New Riders in March 2003 and has since been published in localized versions worldwide, including Korean and Chinese.

sagasnet is a non-profit vocational training initiative within the framework of the MEDIA Programme of the European Commission. The workshops are designed primarily for European writers, designers, developers, directors and producers. Participation fee: freelancers EUR 800.00; company delegates EUR 2,000.00. A limited budget for scholarships is available.

sagasnet & sagas writing interactive fiction
c/o Bayerisches Filmzentrum
Bavariafilmplatz 7
D-82031 Muenchen-Gruenwald
tel + 49 89 64 98 11 29 /30
fax + 49 89 64 98 13 29/30
mobile + 49 (0) 171 45 28 0 52
URL http://sagas.de; http://www.sagasnet.de
e-mail sagasnet[at]sagasnet.de; sagas[at]sagas.de

a joint initiative of MEDIA Programme TRAINING & Academy for TV and Film Munich

Posted by jo at 08:32 AM | Comments (0)

March 03, 2006




A project by Marc Garrett (director, creative), Neil Jenkins (director, creative & tech), Ruth Catlow (director, creative) and Furtherfield in collaboration with Arts Council England (London) (funder).

VisitorsStudio is an open, multi-user, online arena for creative 'many-to-many' dialogue and networked performance. Through simple and accessible facilities, the web-based interface allows users to collage and manipulate their own and others’ audio-visual files, and to imaginatively recontextualise existing media. VisitorsStudio provides a platform for explorations of collective creativity for both established artists and those excluded from traditional art structures, for reasons of geography or social circumstance.

In March 2006, Furtherfield will launch VisitorsStudio Version2, which incorporates new artistic tools and community building facilities, allowing users to schedule and promote their own performance programmes. These can be recorded, archived, downloaded and redistributed as screensavers. 25th-26th March "VisitorsStudio" @ Idea Store, Chrisp Street [a NODE.London event]

Participants upload content, image and sound files (JPG, MP3, SWF) to a shared database, responding to each other’s compositions in real time. Individuals can also chat with each other and are identified within the online space by their own dancing-cursors.

VisitorsStudio is a Furtherfield project collaboratively developed by artists, programmers, critics and curators, with significant contributions by audiences new to net art, and members of online art and technology forums. This platform continues to grow organically in response to their participation.

Posted by jo at 10:01 AM | Comments (0)

February 22, 2006

Jeff Han


Multi-Touch Interaction Experiments

While touch sensing is commonplace for single points of contact, multi-touch sensing enables a user to interact with a system with more than one finger at a time, as in chording and bi-manual operations. Such sensing devices are inherently also able to accommodate multiple users simultaneously, which is especially useful for larger interaction scenarios such as interactive walls and tabletops.

Since refining the FTIR (frustrated total internal reflection) sensing technique, we've been experimenting with a wide variety of application scenarios and interaction modalities that utilize multi-touch input information. These go far beyond the "poking" actions you get with a typical touchscreen, or the gross gesturing found in video-based interactive interfaces. It is a rich area for research, and we are extremely excited by its potential for advances in efficiency, usability, and intuitiveness. It's also just so much fun!

Our technique is force-sensitive, and provides unprecedented resolution and scalability, allowing us to create sophisticated multi-point widgets for applications large enough to accommodate both hands and multiple users.

The drafting table style implementation shown here measures 36"x27", is rear-projected, and has a sensing resolution of better than 0.1" at 50Hz. Stroke event information is sent to applications using the lightweight OSC protocol over UDP.
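The OSC-over-UDP transport Han describes is simple enough to sketch with nothing but the standard library. The `/stroke` address, the argument layout (id, x, y) and the port number below are illustrative assumptions, not the lab's actual event schema:

```python
import socket
import struct

def osc_pad(data: bytes) -> bytes:
    """Null-terminate and pad to a 4-byte boundary, per the OSC 1.0 spec."""
    data += b"\x00"
    return data + b"\x00" * (-len(data) % 4)

def osc_message(address: str, stroke_id: int, x: float, y: float) -> bytes:
    """Encode one OSC message carrying an int32 id and two float32 coordinates."""
    return (osc_pad(address.encode("ascii"))
            + osc_pad(b",iff")                       # type tag string
            + struct.pack(">iff", stroke_id, x, y))  # big-endian payload

# Fire a hypothetical /stroke event at a listener on localhost.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(osc_message("/stroke", 1, 0.25, 0.75), ("127.0.0.1", 57120))
sock.close()
```

Because OSC rides on UDP, each stroke event is a single fire-and-forget datagram, which is what makes the protocol "lightweight" for 50 Hz sensing.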

Posted by jo at 06:57 PM | Comments (0)

January 31, 2006



Cage, Deleuze and Macromedia Director

"..Director recasts the information-processing computer as a movie-making machine, or a signification engine bolstered by interactivity that can be mechanically scripted with finite options or driven by the fluid dynamics of video game methodologies. But when one introduces the program to the cinematic philosophy of Gilles Deleuze and the compositional approaches of John Cage it can become something else entirely. When interactive media is introduced to Deleuze's radical theories of cinema-thought and Cageian silence--which can be understood as the dynamics of a system that is left to "be itself"--a sort of "idle" artificial awareness becomes possible...

...In his 1937 essay "The Future of Music," avant-garde composer John Cage wrote that "the 'frame' or fraction of a second, following established film technique, will probably be the basic unit in the measurement of time" as far as the composer of sound was concerned. Cage was already prepared to think in terms of music as cinematic apparatus composed via a system driven by external events and not prescriptions. Cage was prepared to accommodate accident within structure, and was not afraid of electronic instruments. If we switch the phrase "composer of sound" for "author of media" we find ourselves in some version of Cage's predicted frame-driven future. From megahertz to refresh rates, the computing environment is a choreography of events possessing the complexity and precision of a Balinese gamelan, operating at speeds that border on the unimaginable..." From "EnterFrame: Cage, Deleuze and Macromedia Director," Afterimage, July-August 2002, by David Goldberg.

Posted by jo at 10:37 AM | Comments (0)

January 20, 2006



Call for Entries: Body-image – Image-body

CYNETart_06humane is devoted to the theme of Body-images – Image-body :: the bred, constructed, staged, cultivated, homemade and self-designed human being. Work from the fields of dance & technology, performances, audiovisual compositions, installations, net art and new social and cultural strategies based on new technologies may be submitted to the competition. CYNETart_06humane takes place November 15 to 19. Deadline: March 20, 2006!

CYNETart has been showing new trends in cultural developments in the media arts since 1997. The prevailing progress of civilisation and the introduction of new information technologies into all fields of social life lead to a disembodiment of perception, triggering the search for a new sensitivity for physical experience and for the perception of one’s own body. The international festival focuses on the changed perception of the body caused by new information technologies. Here, the performance of the new technologies matters less than their cultural potential for mediation. This applies, for instance, to linkages between existential questions of humankind and their perception; it also applies to a more sensual approach to scientific methods of research and to the interconnections between the arts.

All in all, CYNETart 2006 presents three awards. The CYNETart award donated by the Stiftung für Kunst & Kultur der Stadtsparkasse Dresden is endowed with 5,000 Euros. The CYNETart award donated by T-Systems Multimedia Solutions is endowed with 5,000 Euros. The sponsorship award of the Saxon Ministry of Science and the Fine Arts is endowed with 10,000 Euros. The award-winning works are selected from the submissions by an international expert jury. Application form online. The deadline for submissions is March 20, 2006!

Organiser: Trans-Media-Akademie Hellerau e.V. and The City of Dresden, Culture Office in co-operation with the European Centre for the Arts Hellerau

Further information is available at www.cynetart.de and from
Trans-Media-Akademie Hellerau e.V.
Festspielhaus Hellerau, Karl-Liebknecht-Str. 56
01109 Dresden, GERMANY
Tel. +49-351-889 6665
Fax +49-351-34 000 33

Posted by jo at 12:57 PM | Comments (0)

January 02, 2006



Pure Data networked jam sessions

PD (an acronym of Pure Data) has recently emerged from among the many software tools devoted to real-time sampling and audio/video streaming, mostly thanks to its flexibility during live performances. It is a real-time coding environment suitable for video, audio and graphic editing. Roman Haefeli has developed an environment, built on PD, for facilitating electronic musicians' jam sessions over a network. It is a client-server system, so it works on any network (internet included), and its name, NetPD, derives from this feature. It is not intended as a platform for creating sounds, but as an environment where every client (i.e. every computer connected to a NetPD server) can share its music patches. Most interestingly, the shared patches can themselves be played through NetPD, and this feature triggers the jam sessions, welcoming all the different contributions. A further peculiarity is that you cannot share sound files (even if they are embedded in a patch). On one hand this makes sample sharing impossible, but on the other it underlines the genuinely generative significance of the music: patches, not recordings, are exchanged. [Vito Campanelli, neural]
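Under the hood, Pd's netsend/netreceive objects speak FUDI, a plain-text protocol of space-separated atoms terminated by a semicolon. NetPD's own message vocabulary is not documented here, so the atom names below are invented for illustration:

```python
import socket

def fudi(*atoms) -> bytes:
    """Encode one FUDI message: space-separated atoms with a ';\\n' terminator."""
    return (" ".join(str(a) for a in atoms) + ";\n").encode("ascii")

# A patch-parameter update as a NetPD client might relay it (names are
# illustrative); a Pd [netreceive] on the far side would unpack the atoms.
message = fudi("synth1", "freq", 440)
assert message == b"synth1 freq 440;\n"

def send_to_pd(host: str, port: int, payload: bytes) -> None:
    """Deliver a FUDI payload to a listening Pd patch over TCP."""
    with socket.create_connection((host, port), timeout=2) as s:
        s.sendall(payload)
```

Since only short text messages cross the wire, patch state stays tiny, which is consistent with NetPD's choice to share patches rather than sound files.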

Posted by jo at 03:18 PM | Comments (0)

December 14, 2005

Remote Touch


Touching over a Network

Remote Touch--by Myra Einstein and Demetrie Tyler--is also part of the ITP Winter Show 2005. To send the sense of touch over a network, it uses two devices, each made up of a grid of "touch pixels". When one device is pressed, the force is applied to the connected device in addition to itself. When the second device is pressed back, the net forces are applied to both. This gives the feeling that you are pressing against someone through a network. Two cameras will be set up so that the user can view what the person on the other side is doing.

When a user pushes on the touch grid, each touch pixel on their own grid will move down according to the force applied, and the corresponding touch pixels on the second user's grid will move up. When the second user pushes on their touch grid, each touch pixel on the first user's grid will move up. When both users are pressing on the touch grid, the net force is applied to each touch pixel, i.e. the touch pixels of the person who is pressing harder will move down and the touch pixels of the other user will move up.
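The net-force rule described above reduces to a per-pixel subtraction. A minimal sketch, with the gain and sign convention (negative = a pixel on your own grid moves down) assumed rather than taken from the project:

```python
def net_displacement(grid_a, grid_b, gain=1.0):
    """
    Per-pixel displacement felt on user A's grid: A's own press pushes a
    pixel down (negative), the remote press pushes it up (positive), and
    simultaneous presses cancel to their net force.
    """
    return [gain * (fb - fa) for fa, fb in zip(grid_a, grid_b)]

# A presses harder on pixel 0, B presses on pixel 2, equal force on pixel 1.
a = [5.0, 2.0, 0.0]
b = [1.0, 2.0, 3.0]
print(net_displacement(a, b))  # [-4.0, 0.0, 3.0]
```

User B's grid simply uses the same function with the arguments swapped, so the two surfaces always mirror each other.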

Posted by jo at 11:29 AM | Comments (0)

December 02, 2005



Move Freely in a Fully Immersive Environment

VirtuSphere revolutionizes the way humans interact with computers. The method and system provide infinite space and the most immersive experience for simulated training, exercise and gaming.

The VirtuSphere platform consists of a large hollow sphere that sits on top of a base and allows the sphere to rotate 360 degrees. Wearing a wireless, head-mounted display, users can step inside the sphere to fully interact in immersive virtual environments. The VirtuSphere enables 6 degrees of freedom – one can move in any direction; walk, jump, roll, crawl, run over virtually unlimited distances without encountering real-world physical obstacles.

VirtuSphere systems are made to client specifications and typically include an easy-to-assemble sphere, a base platform that enables it to rotate, a head-mounted display, 3D sensors, sphere rotation trackers, a computer, device drivers and 3D software applications. [via Rhizome]

Posted by jo at 11:19 AM | Comments (0)

November 17, 2005

Take Heart and flickr colr pickr


Two Visualization Tools

In Take Heart--by Monika Hoinkis--a bowl is filled with water. When you take hold of it by the handles, it is literally moved by the rhythm of your heart. Water becomes storage for discrete information and makes it visible. It starts pulsating with your heartbeat. When you touch the object with only one hand, the water stays calm. When holding both handles, the water starts vibrating. Your heartbeat is calculated through the handles' measurement of your skin resistance and then assigned to the water's pulsation. The heartbeat is now stored and the water will 'beat' with this rhythm until the pulsation slowly abates. At the end the water is calm and discharged again, unless the bowl is touched by a new person, to whose heartbeat it then sets itself. It then beats with another's heart. [via infosthetics]


flickr colr pickr is an online application showing a circular color chart from which users can pick a descriptive color category, plus a specific color and brightness. The application then shows a corresponding collection of color-matched Flickr images. See also tagged colors and the flickrland visualization for similar conceptual approaches. [krazydad.com] [blogged on infosthetics]
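The color-matching idea can be sketched as a nearest-neighbor search in RGB space; the site's actual matching method isn't described here, so this is only an illustrative stand-in:

```python
def hex_to_rgb(h):
    """Parse '#rrggbb' into an (r, g, b) tuple of ints 0-255."""
    h = h.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def nearest(color, candidates):
    """Return the candidate hex color closest to `color` in RGB space."""
    target = hex_to_rgb(color)
    return min(candidates,
               key=lambda c: sum((a - b) ** 2
                                 for a, b in zip(target, hex_to_rgb(c))))

# A dark orange-red picked from the wheel matches the red swatch.
swatches = ["#ff0000", "#00ff00", "#0000ff"]
print(nearest("#cc2010", swatches))  # #ff0000
```

In practice a perceptual space such as HSV or CIELAB would match human color judgments better than raw RGB distance, but the lookup structure is the same.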

KrazyDad is Jim Bumgardner. He's a programmer/hobbyist with a passion for making cool graphics software and software toys. All the software presented on his site was written by him, at home, on nights and weekends. Ditto for the website itself. Jim works programming software for the interactive TV industry. He is also a part-time instructor on various subjects, including interactive art, multimedia, various programming languages, chess for kids, and Lego robotics.

Posted by jo at 01:03 PM | Comments (0)

WJ-s Web Performances Tonight


Agnès de Cayeux, Anne Laforet & KRN

Centre Pompidou, Paris, Les Plasticiens du Web, Thursday, November 17th 2005; Salle de cinema 2 / 7pm / 3 euros / Metro Rambuteau.

WJ-s is software plus a flexible, high-speed-connection public device for web performances, which allows actors of the Internet--sound and image artists, netartists, bloggers, graphic designers, flashers, programmers, curators, hacktivists, new media theorists, pioneers and web mutants--to play live with the full scope of contents available in the wideness of the Web.

Agnès de Cayeux, Anne Laforet & KRN endlessly explore and experiment with the changing and fluctuating territories of the Internet. They subjectively articulate its intimate contours and zones. With "WJs", the artists play with the erotic and heated atmosphere of video chat rooms, with the world of artistic algorithms and computer art, and with an exacting choice of plastic, sensitive, aesthetic and narrative works in the realm of webart.

Agnès de Cayeux, network artist, conception and programming of network interfaces and tools. Research on distance relations and network interactions.

Anne Laforet is a specialist of net art, particularly on preservation, researcher, co-organizer of conferences and public discussions, soundartist.

KRN, network artist, curator, moderator, artistic and cultural event organiser, researcher into the relations between interactivity and desire, art and technology. Technical expertise: development of web sites and interfaces.

WJ-s/ystem : the whole technical device
WJ-s/oftware : the software part of WJ-s/ystem
WJ-s/erver : the device server
WJ-s/tudio : the computer used as a webdeck to control the WJ-s/tations
WJ-s/pring : a network of laptops that controls the WJ-s/tations during the performance
WJ-s/tation : one of the WJ-s/pring laptops (polymedia capture and diffusion)
WJ-s/pace : the venue in which the device is hosted
WJ-s/et : a WJ-s performance
WJ-s/ession : a series of WJ-s/ets

WJs project author, production: Anne Roquigny
Software programming: Stephane Kyles
Technics: Olivier Bernon
Realisation: Marc Bellini
Thanks: Christophe Becker, Thierry Bernard, Nadine Bonnefoi, Ferdinand Corte, Stephane Degoutin, Emilie Fouilloux, Anne Gagnebien, iddem.com, Emmanuel Jamin, Christophe Leclerqc, Sandrine Maricot, Denis Santelli, Sigolene Valax. Partner: ars longa

CONTACT / INFOS anne roquigny ar (at) wj-s.org

Posted by jo at 11:43 AM | Comments (0)

November 08, 2005

Proximity Lab



Proximity Lab is a participatory installation and experimental interface platform designed to visualize relationships between users and mediated spaces. The 8-foot by 16-foot walkable surface is fitted with RFID technology. Participants wear shoes fitted with RFID tags, enabling the system to track and record their positions in real-time. Images projected directly onto the floor are accompanied by stereo sound as a continuous response to the actions and interactions of participants.

Proximity Lab can discern the individual identities of participants regardless of how or where they move. Conceived as an experimental physical interface system, it allows architects with diverse intentions and aesthetic goals to create repeatable experiments in physical interaction. By Evan Karatzas. [blogged by Regine on we-make-money-not]
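A toy model of the tracking layer, assuming the RFID readers report (tag, x, y) events; the class and method names are invented for illustration, not taken from Proximity Lab:

```python
import time
from collections import defaultdict

class PositionTracker:
    """Keep a timestamped movement trail per RFID tag (illustrative model)."""

    def __init__(self):
        self.trails = defaultdict(list)

    def read(self, tag_id, x, y, t=None):
        """Record one reader event: tag seen at floor position (x, y)."""
        stamp = t if t is not None else time.time()
        self.trails[tag_id].append((stamp, x, y))

    def position(self, tag_id):
        """Latest known (x, y) for a participant, or None if never seen."""
        trail = self.trails.get(tag_id)
        return trail[-1][1:] if trail else None

tracker = PositionTracker()
tracker.read("tag-07", 3.0, 12.5, t=0.0)
tracker.read("tag-07", 3.5, 12.0, t=0.5)
print(tracker.position("tag-07"))  # (3.5, 12.0)
```

Because each tag id keeps its own trail, identity survives arbitrary movement, which is the property the installation uses to drive per-participant projections.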

Posted by jo at 08:15 AM | Comments (0)

November 07, 2005

The Musical Gestures Toolbox


Bodily Movement and the Music Theory Paradigm

The University of Oslo Department of Musicology is developing the Musical Gestures Toolbox, a collection of applications with useful features for video and audio analysis. The toolbox is built with Max/MSP/Jitter from Cycling '74, and will be available as standalone applications for Mac OS X and Windows, as well as the original patches.

Features:

* Video capturing from USB/FW-cameras
* Playback of video files (any QuickTime-readable format)
* Adjustable resolution and playback rate
* Brightness, contrast, saturation controls
* Zoom, offset and displacement controls
* Cropping functions (drag and crop)
* Automatic cropping based on contraction
* Computer vision analysis
* Save snapshots and image sequences
* Save video from the analysis
* Contraction and centre of gravity
* Audio analysis: spectrogram, sonogram, spectral centroid, noisiness, loudness...
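One of the listed audio features, the spectral centroid, is just the magnitude-weighted mean frequency of a spectrum. A minimal sketch, assuming the bin frequencies and magnitudes have already been computed by an FFT stage:

```python
def spectral_centroid(freqs, mags):
    """Magnitude-weighted mean frequency of a spectrum, in the same units
    as `freqs`; returns 0.0 for a silent frame to avoid dividing by zero."""
    total = sum(mags)
    if total == 0:
        return 0.0
    return sum(f * m for f, m in zip(freqs, mags)) / total

# All the energy in the 440 Hz bin puts the centroid exactly there.
freqs = [0.0, 220.0, 440.0, 880.0]
mags = [0.0, 0.0, 1.0, 0.0]
print(spectral_centroid(freqs, mags))  # 440.0
```

The centroid tracks perceived "brightness," which is why it is a common low-cost descriptor for relating sound to gesture.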

"...there are many different kinds of gestures associated with music, but it could be useful to consider gestures in view of the following three main categories:

* Sound-producing gestures, such as hitting, stroking, bowing, blowing, singing, kicking, etc. Mental images of such gestures, including the associated modes of execution such as fast, slow, hard, soft, short, long, etc., are usually indissociable from our notions of musical sound, evident in music-related metaphors (e.g. "hammering", "sweeping", "caressing", etc.) and mimicry (e.g. playing "air drums" or "air guitar").

* Sound-accompanying gestures, including all kinds of movements we can make to music such as marching, dancing, and more vague sound-tracing gestures such as following the melodic contours, rhythmical/textural patterns, timbral or dynamical evolutions, etc. with our hands, arms, torso, etc.

* Amodal, affective or emotive gestures, including all the movements and/or mental images of movements associated with more global sensations of the music, such as images of effort, velocity, impatience, unrest, calm, balance, elation, anger, etc., gestural images and concepts which are also encountered in dance.

These categories of gestures often overlap, and a gesture may belong to more than one category, e.g. energetic drumming may be perceived both as sound-producing gestures and as emotive images of joy or elation. Musical gestures thus encompass a large territory stretching from details of sound-production to more global emotive and aesthetic images of music, and also include considerations of cultural-stylistic vs. more universal modes of expression. In all cases, we believe musical gestures manifest the primordial role of human movement in music. For this reason, we speak of embodied perception and cognition in music in the sense that we as listeners relate musical sound to mental images of gestures, i.e. that listening (or even merely imagining music) also is a process of incessant mental re-enactment of musical gestures.

We believe the idea of embodied perception and cognition could represent a change of paradigm in music theory and other music related research, research which has often tended to exclude considerations of bodily movement from its conceptual apparatus in favour of focus on more abstract, notation-based elements of music. In our project, the focus on musical gestures provides us with a coherent and unifying perspective for what we see as a much needed renewal of music theory and other music research. Fortunately, recent developments within the cognitive sciences, music technology, and technologies for capturing and representing gestural data, converge to give us very favourable circumstances for this shift of focus towards musical gestures."

Posted by jo at 11:54 AM | Comments (0)

Flock Browser


This Browser Promotes Creativity

"Web browsing used to be mostly about just that: Surfing site after site for information and goods. But lately, more people are using the Internet as much to produce and share things as to consume them. A new browser called Flock seeks to address the new reality of enhanced online creativity and community.

It's a souped-up version of the Mozilla Foundation's Firefox browser, with features added to help users create Web journal entries and share favorite Web sites. Although Flock is still in an early preview mode, meaning it is crash-prone among other problems, it offers a good sense of what to expect." From "REVIEW: Flock Browser Promotes Creation" by Anick Jesdanun, seattlepi.com. Also "Super 'Firefox' previewed: What the Flock?" by Nick Farrell, the Inquirer. [via netbehaviour]

Posted by jo at 08:40 AM | Comments (0)

November 04, 2005



Software for Buildings

Arch-OS constructs a virtual architecture from the activities that take place within the Portland Square Development (Plymouth, UK). This "software for buildings" has been developed to manifest the life of a building and provide artists, engineers and scientists with a tool for developing transdisciplinary work and public art. Embedded technologies capture audio-visual and raw digital data from the building through:

* the Building Energy Management System (BEMS)
* computer and communications networks
* the flow of people and social interactions
* ambient noise levels.

This data is then manipulated (using computer simulation, visualisation and audio technologies) and replayed through projection systems incorporated into the architecture and broadcast using streaming internet technologies. Developed by the Institute of Digital Art & Technology. [blogged by Regine on we-make-money-not]

Posted by jo at 11:35 AM | Comments (0)

November 02, 2005



Dan Sandin Interview

Since the late 1960s Dan Sandin has developed artware systems integrating digital + analog computers, customized circuits, home{brewed|built}-hardware, video games + virtualReality. Sandin, a professor at the University of Illinois at Chicago, founded the Electronic Visualization Lab (EVL), created the Sandin Image Processor (I.P.), developed the CAVE virtual reality (VR) system + various other [artware systems/technologies/projects/pieces]. His Image Processor (built from 1971 - 1973) offered artists unprecedented abilities to [create/control/affect/transform] video + audio data, enabling live audio video performances that literally set the stage for current realtime audio video art praxis.

To facilitate the open release of the plans for the Image Processor as an [artware/system/toolset], Sandin + Phil Morton created the Distribution Religion. As a predecessor to the open source movement in the tradition of free software, this approach allowed artists to engage with these hardware systems + continues to [interest/inspire] [artists/developers]. In order to honor the innovative {recent futures|parallel hystories} of the Image Processor + the Distribution Religion, criticalartware has converted the deadTree Distribution Religion into a single PDF file + a web-based version, for release to the {criticalartware} community.

criticalartware interviews Dan Sandin, [discussing/illuminating] the community + development of the early moments of video art in Chicago, artware, performing live audio video, virtual reality, open source, righteous NTSC outputs, the video revolution + the changes + similarities that [bridge/differentiate] then && now. criticalartware freely offers this interview as {text|audio|video} data to be downloaded via the interweb + exchanged as shared cultural resources.

Posted by jo at 10:22 AM | Comments (0)

October 25, 2005



Portable Augmented Reality

Computer monitors are by no means an endangered species, but it is becoming increasingly apparent that they will soon be replaced for many uses by floors, walls and table tops. The latest evidence: a Microsoft Research scientist has developed a projector and computer vision system dubbed PlayAnywhere that projects interactive computer-generated images without the need for specially mounted cameras.

Researchers have been reducing the cost and complexity of augmented reality systems in recent years. (See PCs augment reality, TRN June 26/July 3, 2002). The PlayAnywhere system goes further by packaging the components into a single portable unit that doesn't require calibration. The system consists of an NEC tabletop projector, an infrared light source, an infrared camera and a computer. The device projects a 40-inch diagonal image onto the surface it stands on.

Computer vision techniques allow users to use their hands to move, rotate and scale projected virtual objects. The system tracks shadows to determine where fingertips touch the surface; frame-to-frame pixel-level changes determine hand motion. The system also keeps track of sheets of paper in its view and can project images onto them.
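The frame-to-frame differencing described here is easy to illustrate. A toy sketch in pure Python (the threshold and frame sizes are invented for illustration; the actual PlayAnywhere pipeline is not detailed in the article):

```python
def motion_mask(prev_frame, cur_frame, threshold=25):
    """Pixel-level frame differencing: True where a pixel changed
    enough between frames to count as hand motion."""
    return [[abs(c - p) > threshold for p, c in zip(prow, crow)]
            for prow, crow in zip(prev_frame, cur_frame)]

# Two toy 4x4 infrared frames; one pixel brightens sharply.
prev = [[0] * 4 for _ in range(4)]
cur = [row[:] for row in prev]
cur[1][2] = 200
mask = motion_mask(prev, cur)
print(sum(v for row in mask for v in row))  # -> 1 changed pixel
```

In the real system a mask like this would drive the hand-motion estimate, while the separate shadow analysis locates fingertip touches.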

The projector system could be used for games, educational software and other interactive graphical computer applications.

(PlayAnywhere: A Compact Interactive Tabletop Projection-Vision System, Symposium on User Interface Software and Technology (UIST 2005), Seattle, October 23-26, 2005) [posted on Technology Research News Roundup]

Posted by jo at 12:12 PM | Comments (0)

Two Thousand + SIX


Conference on Performance in Mediated Environments

Two Thousand + SIX: A new international conference focusing on performance in technology mediated environments will kick off as part of the 2006 Sonorities Festival of Contemporary Music (see related post), hosted at the Sonic Arts Research Center, Queen's University Belfast. The festival is the longest-running new music festival in Ireland, presenting cutting-edge new music and featuring some of the most thought-provoking and controversial musicians.

The "mini-conference" aims to reflect the innovative approach of the festival and will, for the first time, bring together performance practitioners (from music, dance, theatre and new media), composers, and theorists in order to discuss a wide variety of topics on performances that are informed by new technologies.

The "mini-conference" is planned for Saturday, the 29th of April 2006 at SARC in Belfast. Paper sessions will take place in the morning and the afternoon, with two keynote speakers, a lunchtime performance and an evening concert.

Keynote Speakers:

Susan Broadhurst: Sue is a writer and practitioner in the creative arts. She is currently Subject Leader of Drama Studies at Brunel University. She is the author of "Liminal Acts: A Critical Overview of Contemporary Performance and Theory" (1999), "Digital Practices: A Critical Overview and Neuroesthetic Approach to Performance and Technology" (forthcoming, 2005), and "Performance and Technology: Practices of Virtual Embodiment and Interactivity" (forthcoming, 2006). Sue is also co-editor of the Body, Space & Technology online journal.

Ben Watson: Wire contributor, Trotskyist and Zappaphile, Watson has written extensively on themes such as pop culture, situationism, punk, Adorno, Frank Zappa and improvisation. He is the author of "The Negative Dialectics of Poodle Play", “Derek Bailey and the Story of Improvisation”, and “Art, Class and Cleavage”.

Call for papers/presentations:

For this one-day conference we invite proposals for papers that reflect on performance and technology. As this is a short event, a maximum of 16 papers of 20 minutes duration (plus question time) will be accepted. Abstracts (max. 350 words) are due by the 15th of December 2005. Presenters of accepted papers/presentations will be informed by the 15th of January 2006. The registration for the one-day event will be £40 (£15 unwaged). This includes free access to all Sonorities Festival events on the 29th April 2006.

All accepted papers will be published online.

Further information on the conference will be posted in December 2005 on the SARC website.

Submissions and all queries should be directed to:
f r a n z i s k a s c h r o e d e r

Posted by jo at 11:40 AM | Comments (0)

October 07, 2005



rapid response printing tool

The PixelRoller project is a collaborative investigation into a new creative tool by Stuart Wood MA (RCA) Interaction Design and Florian Ortkrass MA (RCA) Design Products. PixelRoller is a paint roller that paints pixels, designed as a rapid response printing tool specifically to print digital information such as imagery or text onto a wide range of surfaces. The content is applied in continuous strokes by the user. PixelRoller can be seen as a handheld "printer", based around the ergonomics of a paint roller, that lets you create images by your own hand.

Ever-increasing technological complexity has removed the user from the creativity of the printing process. This separation diminishes the possibility of creative and live input during output. Conventional printing is usually bound to a certain location, output method and process. Each printing process is very specific to a single application, and the technology has become less adaptable and more inflexible for the end user. Manual painting, by contrast, requires certain skill and time, and is limited in accuracy. However, the ability to print manually has advantages in terms of influencing the output, engaging with the medium and the freedom of not being limited to one substrate. [via neural.it]

Posted by jo at 11:09 AM | Comments (0)

September 30, 2005

International SMS Chatwall


Outdoor Public Chat

The SMS projector system developed by Haase & Martin GbR has now been published in an English version. Short text messages, sent from any mobile phone, are received by the SMS Chatwall and shown on TV and projector systems. Developed first for German-speaking countries, including Austria and Switzerland, the SMS Chatwall is now being offered to the international event sector.

The SMS Chatwall receives short text messages sent from any mobile phone and shows them on TV and projector systems. It is a new media event and promotion tool, separated into an administration module and a visualization module. The administration module allows the administrator to look over incoming text messages before broadcasting them via the projector system; each message can be accepted, declined or edited. Alternatively, all messages can be accepted automatically. Misuse is prevented. Together with the event organizer, the administrator can also display his or her own messages or pictures to support an event with multimedia screen content. [via]
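The accept/decline/edit moderation flow splits naturally into a pending queue and a broadcast list. A hypothetical sketch in Python (class and method names are invented, not taken from the Haase & Martin product):

```python
from dataclasses import dataclass, field

@dataclass
class Chatwall:
    """Toy model of the admin/visualization split: messages wait in a
    pending queue until accepted, or pass straight through in auto mode."""
    auto_accept: bool = False
    pending: list = field(default_factory=list)   # awaiting moderation
    screen: list = field(default_factory=list)    # shown by the projector

    def receive(self, text):
        if self.auto_accept:
            self.screen.append(text)
        else:
            self.pending.append(text)

    def accept(self, index, edited=None):
        """Broadcast a pending message, optionally edited first."""
        msg = self.pending.pop(index)
        self.screen.append(edited if edited is not None else msg)

    def decline(self, index):
        self.pending.pop(index)

wall = Chatwall()
wall.receive("hello from the square")
wall.receive("spam spam spam")
wall.accept(0)    # broadcast the first message
wall.decline(0)   # drop the spam
print(wall.screen)  # -> ['hello from the square']
```

Setting `auto_accept=True` models the system's automatic mode, where every incoming message reaches the screen without review.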

Posted by jo at 12:14 PM | Comments (0)


September 21, 2005

Play-Doh as Interface


Fuzziness in a World of 1s and 0s

"Analog control of digital devices and media has always appealed to [Brendan Dawes]. Like the controllers for a PS2, they allow degrees of fuzziness in a world made of very strict ones and zeroes. Life is not black and white, so for deeper interactive experiences we need to look at control devices that allow our analog thinking brains to influence and control this digital world.

Play-Doh is a fantastic analog material. It's the total opposite of the ones-and-zeroes paradigm. So I took the idea of exploring Play-Doh as a fuzzy interface, looking at various ways to use this tactile, fun "interface" to control digital media. In this example the amount of Play-Doh on screen dictates how fast the film plays.

As I twist the Play-Doh and take bits away, the film reacts accordingly in real-time. Add too much Play-Doh and the film rapidly speeds up. An intimate connection is made between the user and the media. Every action has a reaction in the digital space. No scary buttons to press. No instructions to read. It’s just Play-Doh."
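The mapping Dawes describes, from the amount of material in view to playback speed, could be sketched like this in Python (the baseline amount and clamp value are invented for illustration):

```python
def playback_rate(dough_pixels, baseline=5000):
    """More Play-Doh in view -> faster film. At the baseline amount the
    film runs at normal speed (1.0); clamp so it never stops entirely."""
    rate = 1.0 + (dough_pixels - baseline) / baseline
    return max(0.1, rate)

print(playback_rate(5000))   # baseline amount -> 1.0 (normal speed)
print(playback_rate(10000))  # double the dough -> 2.0 (double speed)
```

In the installation, `dough_pixels` would come from a camera segmenting the brightly coloured Play-Doh from the background on every frame, so twisting or removing material changes the rate in real time.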

Posted by jo at 05:30 PM | Comments (0)

September 19, 2005

Troika Ranch


Live-I Workshop

New York City based dance theatre company Troika Ranch returns to FACT to present its Live Interactive (Live-I) workshop, an intensive seminar for artists and advanced students who want to explore the use of interactive computer technology in the creation and performance of dance, theatre, and related live artworks.

Led by composer/programmer Mark Coniglio and choreographer Dawn Stoppiello, Artistic Co-Directors of Troika Ranch, the workshop gives the participants the opportunity to experiment with technological tools that allow their gestures and vocalizations to interactively control video, sound, light and other computer controllable media.

The three day workshop at FACT will serve as an intensive introduction to the software and hardware, with special emphasis on Isadora®, the real-time media manipulation software created by Coniglio. This work will be complemented by critical discussions regarding how the use of media and technology shapes the form and content of the artwork.

FACT, Liverpool; 31 October - 2 November 2005; 3 days, 10.30am - 5pm; cost £105 + VAT.

Posted by jo at 10:00 AM | Comments (0)

September 12, 2005



Collaborative Mapmaking

Map-o-matix--by Yves Degoyon, Sergio Moreno, Jaume Nualart and others--is a collaborative environment for creating and editing maps. The maps can contain physical elements (oil resources, telecommunication antennas...) as well as abstract ones (human organisations, web sites, RSS...) and relations between these entities. Behind this tool is the idea of editing and publishing tactical maps: maps that are not only physical but also maps of control structures and of localized social infrastructures. [via]
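A map of this kind is essentially a graph of typed entities and relations. A minimal Python sketch (names and the data model are illustrative assumptions, not Map-o-matix's actual internals):

```python
class TacticalMap:
    """Entities (physical or abstract) plus typed relations between them."""

    def __init__(self):
        self.entities = {}   # name -> kind ("physical" or "abstract")
        self.relations = []  # (source, relation, target) triples

    def add_entity(self, name, kind):
        self.entities[name] = kind

    def relate(self, source, relation, target):
        # Relations may only connect entities already on the map.
        assert source in self.entities and target in self.entities
        self.relations.append((source, relation, target))

m = TacticalMap()
m.add_entity("antenna-12", "physical")
m.add_entity("community-wifi-group", "abstract")
m.relate("community-wifi-group", "operates", "antenna-12")
```

The point of the triple form is that abstract entities (organisations, web sites) and physical ones (antennas, oil resources) live in the same structure, so a rendered map can show control relationships alongside geography.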

Posted by jo at 10:49 AM | Comments (0)

Interactive Billboards


What's afoot for advertising?

REDWOOD CITY, Calif. -- This was Matt Bell's idea of fun: Take a semester off from his undergraduate studies at Stanford University, and go to Cambridge to help out with research at the Mitsubishi Electric Research Lab, a group of forward-thinkers who develop innovative display and interface technology. In his free time, Bell would sit in on lectures at MIT's Department of Electrical Engineering and Computer Science.

During one lecture, Bell found his mind wandering, and as the professor walked in front of a screen, the light beam from the LCD projector blanketed his body. Bell wondered whether it would be possible for a person's body to have an impact on the projected image. ''I got this image of waves rippling off of him as he crossed the screen," Bell says. ''I started thinking, 'How would I make that happen?' "

Five years later, having finished his studies at Stanford, Bell is chief scientist for Reactrix Systems, a Silicon Valley company that has attracted $23 million in venture capital funding. In the company's office, amidst the cubicles, Bell stands just outside the edge of what looks like a white plastic dance floor, a rectangle of about four by six feet.

A super-bright LCD projector mounted on the ceiling casts images onto the floor. There's also an infrared camera nestled next to the projector, watching to see who does what on the floor. Suddenly, an image of a soccer field is projected, with the Adidas logo plastered on everything. When Bell ''taps" the image of the ball with his toe, it glides across the floor, and into the goal I'm supposed to be guarding.

There are other games, like Whack-a-Mole and ice hockey, all bedecked with sponsor logos. Some of Reactrix's ''spots" aren't really games, but rather interactive animations. ''Touching" a kernel of Orville Redenbacher popcorn with a foot or hand makes it explode into a puffy white morsel; ''nudging" a Hot Wheels car makes it change direction, sometimes crashing into others on the track.

''Most advertising forces itself on people," Bell says. ''It takes up your time without asking permission. We thought that if we made advertising fun, people will be more interested in it."

And Reactrix's interactive billboards -- floorboards? -- are fun, especially when they lure several people into playing together in a public space. At a moment when people are buying digital video recorders and subscribing to satellite radio to escape commercial messages, this is advertising that people will want to engage with.

The next step for Reactrix is to roll out more displays, which it calls Reactors. Chief executive Mike Ribero says 300 will be deployed this fall, in locations like malls and movie theaters. (The closest ones to Boston now are in Manhattan's Times Square, in a Toys ''R" Us store, and in an AMC multiplex.) An advertiser will pay $4,000 a month to have its spot played at a single location, as part of a rotating group of 10 spots.

Ribero says that because users are physically involved, they have a better recollection of the brand names than if they had simply strolled past a poster or billboard. ''Once we get to 1,000 locations, then we'll have a network that can compete effectively against broadcast and cable," he says.

Ambitious goals for a company that was inspired five years ago by a tedious MIT lecture.

Video revolution

Entrepreneur Rob Frasca has been thinking about how to intelligently place ads into online video. He has been holed up at Highland Capital Partners' first-floor incubator space in Lexington, along with three fellow Lycos alums.

Bob Davis, who founded Lycos and was its chief executive, and is now a partner at Highland, offered Frasca the space, but Davis hasn't committed to making an investment in the start-up, I-Video Works.

That's fine by Frasca, who says, ''If you raise capital too early, you give away too much of the company." In the past, Frasca has started and sold companies to Intuit and Lycos.

Frasca notes that more people now have broadband connections than dial-up, which is allowing them to consume more video over the Internet. ''We're working to create tools to get video up on the Web, stream it, put ads in it, monetize it, and manage it," he says.

Of particular interest are tools that will allow users to recommend videos to one another, and techniques for inserting ads based on who is viewing a video, when they are watching, and on what device.

''With a traditional television show, everyone watching sees the same ad," Frasca says. He's thinking about delivering different ads within a snippet of video, depending on whether it's viewed on a PC or a mobile phone, for instance, or whether the video is being seen at noon or midnight.
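Frasca's context-dependent ad insertion amounts to a small targeting rule engine: match the viewing device and time of day against each ad's constraints. A hypothetical sketch in Python (all field names and ad names are invented for illustration):

```python
def pick_ad(ads, device, hour):
    """Return the first ad whose targeting rules match the viewing
    context; fall back to an untargeted house ad."""
    for ad in ads:
        devices = ad.get("devices")   # None means "any device"
        hours = ad.get("hours")       # None means "any hour"
        if devices is not None and device not in devices:
            continue
        if hours is not None and hour not in hours:
            continue
        return ad["name"]
    return "house-ad"

ads = [
    {"name": "late-night-mobile", "devices": {"mobile"}, "hours": range(22, 24)},
    {"name": "daytime-any"},
]
print(pick_ad(ads, "mobile", 23))  # -> late-night-mobile
print(pick_ad(ads, "pc", 12))      # -> daytime-any
```

The same lookup would run each time a video snippet is streamed, which is what lets two viewers of one clip see different ads.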

Veteran firepower

Like Frasca's band of ex-Lycosians, Art Technology Group cofounder Joe Chung has pulled together several veterans of that e-commerce company to work together on a start-up. Allurent, headquartered in the old Wordsworth Books building in Cambridge, is announcing its launch today at the Shop.org trade show in Las Vegas.

The company was started last September in Chung's Cambridge dining room. He says it focuses on helping online retailers ''provide a better customer experience" by supplying a suite of ready-made software modules. The modules will be based on Macromedia's Flash technology, which renders web pages more fluid and interactive. The first releases will be a shopping cart and a check-out system.

Several of the same angel investors who initially backed Art Technology Group are helping to fund Allurent, including Bob Forlenza of Tudor Investments, Jeff Newton of Gemini Investors, Scott Jones of Gracenote, and Bob Maginn of Jenzabar. Media Lab founder Nicholas Negroponte and Teradyne founder Alex D'Arbeloff are also helping to fund the 12-person start-up.

What's afoot for advertising? by Scott Kirsner, @LARGE [The Boston Globe], September 12, 2005.

Posted by jo at 10:30 AM | Comments (0)

September 06, 2005

HIVE Networks Workshop + Launch


DIY Kit for Ubiquitous Computing and Free Networking

Alexei Blinov of Raylab and a group of collaborators have set out to create an exciting project, HIVE Networks, which promises to change the perception of ubiquitous or pervasive computing. HIVE combines the virtues of free software, free networks and open hardware to generate a framework for virtually any type of networked media application.

The group of independent programmers, artists and electronics specialists imagine swarms of intelligent network devices which all collaborate, facilitate media applications such as audio and video streaming and create clouds of free bandwidth using ad-hoc networking protocols.

The hardware basis is currently a customized Asus wireless hard drive. The firmware has been replaced with Linux, and a cross-compilation tool kit has been developed. This means that any sort of application can be made to run on the cheap hardware, which costs about 50 Euros a piece. For instance, a responsive environment could be built using sensors, light, audio and video; or points of information exchange using WLAN and Bluetooth can be installed spread out across urban areas, because the individual units are cheap and replaceable.

Blinov is keen to point out that the Asus boxes currently used are only the first step towards creating a framework for HIVE applications. The development is based on a set of key principles. The hardware has to be as cheap as possible and function according to widely used generic standards. The software has to be completely free and open source. Everything else is left to the creative imagination.

Raylab invites media practitioners to participate in application development for HIVE networks. Now that the basic toolkit has reached a phase of stability with increasing maturity, media practitioners - media activists, community groups, artists - are invited to come up with ideas for applications which Raylab will try to facilitate.

Ideally, in the next phase, this should all be done via a web based interface. The media practitioners do not have to get involved in deep technological development. All they need to do is click a few radio buttons to customize their own application which then will be compiled and installed on the device.

With HIVE Networks the traditional role, in which the artist has an idea and gets a technician to develop the technical part, gets reversed. Here, creative technicians point out future developments and offer a unique platform for artists. With HIVE, ubiquitous computing takes on a new meaning. As the black box of technology is cracked open and made accessible, virtually anything is possible.

WORKSHOP: HIVE Networks - Swarms of information generating and processing devices

Friday, 09.09., 14.00 - 18.00

chip shop: application customization

Media practitioners, artists and developers are invited to share their visions of future applications for HIVE networks.

Both presentation and workshop are free. Please register if you want to participate in the workshop via email to: hive[at]kunstlabor.at

Posted by jo at 11:51 AM | Comments (0)



Blending [Distributed] Participating spaces

Telemurals is an audio-video connection in which a communication space is created by breaking apart the pixels and speech of the participants at each end and reassembling them abstractly. The initial setup is straightforward. Two disjoint spaces are connected with an audio-video wall. Video and audio from each space are captured. The two images are then rendered, blended together, and projected onto the wall of their respective spaces. What distinguishes Telemurals from traditional media space connections are the image and audio transformations that evolve as people communicate through the system, and the blending of the participating spaces.

Participation is required for this communication space to work. To reinforce a sense of involvement, we give the system some intelligence to modify its space according to certain movements and speech inflections. First, the image is rendered non-photorealistically. Second, words spoken in both spaces are captured, converted to text, and rendered on the screen in blocks that fade away over time. The immediate feedback of seeing one's spoken words alter the wall lets participants know they are adding to and affecting the shared environment. More complicated image manipulations are driven by changes in the pitch and volume of the voice.
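The fading text blocks could be modelled as a list of (word, opacity) pairs decayed on every animation tick. A toy sketch in Python (the decay rate is invented; the actual Telemurals rendering is of course more elaborate):

```python
def fade_words(words, decay=0.2):
    """Each rendered word loses opacity every tick; fully faded words
    disappear from the wall."""
    return [(w, round(a - decay, 2)) for w, a in words if a - decay > 0]

# A freshly spoken word enters at full opacity; older ones are dimmer.
wall = [("hello", 1.0), ("there", 0.3)]
wall = fade_words(wall)
print(wall)  # -> [('hello', 0.8), ('there', 0.1)]
```

Running the function once per frame gives the effect described: new speech is vivid, older speech lingers faintly and then vanishes.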

Much research has been conducted on the benefits and limitations of adding video to an audio communication channel. One of our goals here is to see how using non-photorealistic images influences conversation. Adding video to a communication channel improves the capacity for showing understanding, attention, forecasting responses, and expressing attitudes. A simple nod of the head can express agreement or disagreement in a conversation. Gestures can convey concepts that aren't easily expressed in words; they can express non-rational emotions and non-verbal experiences.

Yet these cues are not always properly transmitted. There may be dropped frames or audio glitches. A lack of synchronicity between image and audio can influence perceptions of, and trust in, the speaker at the other end. Other challenges include equipment placement. For example, camera placement has long been a cause of ambiguous eye gaze in audio-video links. A large camera offset gives the impression that the person you are speaking to is constantly looking elsewhere.

With Telemurals, we are creating an environment where rendered video maintains subtle cues of expression such as posture and hand motion, yet also enhances other cues. For example, changes in voice volume and pitch may be represented in the graphics of the rendered video. Audio conveys the speech inflections in conversation by altering parameters of the rendering. By adding another layer of abstraction to the video stream, we can enhance cues in a manner that is not possible in straight video streams.

Abstracting the faces is a central concern in the rendering. It builds on the intrigue of how one would appear if transplanted into this filtered space. Caricature and cartooning are abstractions of this kind. There are elements of curiosity, humor, and surprise in seeing a caricature, yet there is enough resemblance to make a personal connection.

The creation of an abstract environment in Telemurals has some similarities to virtual spaces in online worlds. Examples of fantasy environments for conversation can already be seen in online graphical communities. One essentially puts on a mask and enters an electronic communication space. This element of mediation removes many of the social barriers by allowing one to safely enter a stimulating world. Just as this lack of concrete identity may aid in suddenly and continually confronting new people in virtual environments, it may help in our physical-virtual hybrid. It has the added benefit of mitigating the perception of offset eye gaze and the effects of unsynchronized audio and video.

In this project, the abstraction of person, setting, and feedback provides the social catalyst for the experience. This new wall created by filtering acts as an ice-breaker, a common ground for interaction, and an object for experimentation. How will one communicate in this abstracted space? How will participants' behavior affect their appearance and the appearance of the setting? How different is communication using photorealistic vs. non-photorealistic video? The goal here is to create new styles of movement and speech interaction by providing a common language across the two spaces.

The first Telemurals installation will connect two MIT graduate dormitories, Ashdown and Sidney-Pacific. The Telemural in Ashdown is located on the lower level in the common recreation room. In Sidney-Pacific, the Telemural will be placed near the main entrance. This connection came about as the committee of the under-construction Sidney-Pacific dormitory was looking to put public art in its public areas and create spaces that encourage students to gather. Ashdown, the oldest graduate dormitory on campus, was similarly undergoing renovations to create public spaces for social gatherings, and the two dormitories were open to the idea of linking the older dormitory to the newer one. The sites within the dorms were chosen because they have high levels of traffic, are openly public, and because a large video wall blends aesthetically into the space.

Posted by jo at 10:52 AM | Comments (0)

September 02, 2005

Unusual Encounters


Chance Encounters

Unusual Encounters is a three-part, web-based art project in which Jean Paul Gaultier has had the pleasure of participating and which is a mixture of creative sophistication and the latest technology, unique in the world, viewable at this address. Designed by the film director David Mileikowsky, under the artistic direction of Jean Paul Gaultier, the Unusual Encounters site proposes 3 original modules, offering 3 unique meeting experiences:

On September 9th, an on-line artistic improvisation will be organised for 500 trendsetters around the world and we have the pleasure of inviting you to take part in this exclusive event. On that day, "Zhang will meet Veng" and for the first time ever, the unusual encounter between two unrivalled virtuosos, Maxim Vengerov and Zhang Yimou, the Russian and the Chinese, the violinist and the film director: two meteors of the Modern Arts will be brought together online in a live artistic performance, somewhere between China and Germany. Some 9,000 km apart, the amazing duo will perform on-line, in unison. The first showing is scheduled for September 9th, at 12:00 exactly (Paris time).

You might also like to try Tango Painting, available as of today! For the first time ever, this multi-user graphic application allows pairs of users to simultaneously draw multimedia forms at a distance and in an entertaining way, regardless of their skill level. Their work can then be complemented by creations from artists such as Jean Paul Gaultier and Jean-Baptiste Mondino, then shared with others on the web.

Finally, on September 10th, The Perfume Alchemist will be presented in an on-line world premiere. This visionary manga by Alain Escalle (director) tells the story of a child, a little prince of the streets in a high-rise megalopolis, who decides to create the perfume of his dreams. Thanks to "Manga Maker", an on-line video editing platform, every user can participate in adapting the course of the story and then pass the modified version on to his/her soul mate. An extract of this manga is available here.

Posted by jo at 10:25 AM | Comments (0)



Exploring the City with Digi-Diviner

URBANtells is a mobile sound installation in which the user explores a neighborhood on foot, "data-diving" with a handheld device called digi-diviner. Participants will provide their cell phone and email address to an attendant at a kiosk. In return, they will be given a digi-diviner to walk and explore the neighborhood. A minute or so after they go outside a real time mix of sound art and verbal information triggered by their location will start to play through an earbud attached to the diviner. The information is a mix of recordings of residents and historians, text-to-speech synthesis, recitations, musique, and other processed sound. The information will address the complex layers of personal and collective histories and experience in urban environments, and the degrees to which these acoustic "tells" influence behavior and development within a community similarly to traditional, physical structures.
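The location-triggered playback could be sketched as a simple geofence lookup: play whichever clip belongs to the zone the walker is currently in. A hypothetical Python illustration (the zone coordinates and clip names are invented, not from the URBANtells system):

```python
def clip_for_position(zones, lat, lon):
    """Return the sound clip whose rectangular zone contains the
    walker's current position, or None between zones."""
    for (lat0, lat1, lon0, lon1), clip in zones:
        if lat0 <= lat <= lat1 and lon0 <= lon <= lon1:
            return clip
    return None

# Each zone is (lat_min, lat_max, lon_min, lon_max) -> clip to trigger.
zones = [
    ((39.28, 39.29, -76.62, -76.61), "harbor-history.mp3"),
    ((39.29, 39.30, -76.62, -76.61), "resident-voices.mp3"),
]
print(clip_for_position(zones, 39.285, -76.615))  # -> harbor-history.mp3
```

On the digi-diviner, a loop polling the device's position would call a lookup like this every few seconds and start the returned clip through the earbud.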

Users will be able to upload sounds, text, and still images they capture with the diviner en route. This information will be available to other users. Upon returning to the kiosk, participants will receive an interactive Google map of their specific walk via email, containing buttons to play sounds and view images they may have uploaded during their trip. Urbantells, a project by James Rouvelle, Joe Reinsel and Steve Bradley, opens June 2, ‘06 at Art Interactive, Cambridge, MA. [blogged by Regine on we-make-money-not]

Posted by jo at 08:30 AM | Comments (0)

August 31, 2005



Virtual Instruments for Live Performance

Repercussion.org is a collection of on-screen virtual instruments designed for live performance and online user exploration. Rhythms and musical phrases can be dynamically composed and altered based on the arrangement of objects on the screen.

The collection is a showcase for the visual representation of freely-formed musical compositions. The project aims to be engaging enough to encourage the viewer to experiment, while being sophisticated enough to allow for a great deal of variation in the sound being created. The actual musical compositions are generated in real-time from a library of pre-recorded midi-based sounds, employing algorithms to map the on-screen graphics to characteristics of the sound arrangement such as pitch, tempo and volume.
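The mapping from on-screen arrangement to pitch, tempo and volume might be sketched like this in Python (the MIDI ranges and formulas are invented for illustration, not taken from Repercussion.org):

```python
def map_to_sound(objects, height=600, width=800):
    """Screen position -> note parameters: higher objects play higher
    pitches, objects further right play louder; density sets the tempo."""
    notes = []
    for x, y in objects:
        pitch = 48 + int((1 - y / height) * 36)  # MIDI note 48..84
        volume = int(x / width * 127)            # MIDI velocity 0..127
        notes.append((pitch, volume))
    tempo = 60 + 10 * len(objects)               # more objects, faster
    return notes, tempo

# Two objects: one mid-screen, one loud and high in the top-right corner.
notes, tempo = map_to_sound([(400, 300), (800, 0)])
print(notes, tempo)  # -> [(66, 63), (84, 127)] 80
```

Re-running the mapping whenever an object is dragged is what makes the composition feel continuous: every rearrangement immediately re-parameterizes the pre-recorded MIDI sounds.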

The graphics were designed with a wide audience in mind, such that the concepts represented are accessible to a viewer regardless of his level of musical training, while at the same time matching the artist's idea of a basic kit of musical instruments for “jamming”: some drums, some melodies, a way to create chords, strings, etc. To be available to the largest online audience, Macromedia Flash was chosen as the medium. The piece is a work in progress that will be updated and customized. [blogged on media·teletipos]

Posted by jo at 11:19 AM | Comments (0)

Mediamatic Interactive Film Lab @ Budapest


6-day Workshop

The Intermedia Institute of the Budapest Academy of Fine Arts is going to host a Mediamatic Interactive Film Lab. Intermedia has promoted the contacts of art and technology in the past years, building up an extensive network in Hungary and Europe. As part of its diverse interest in non-linear modes of narration, the Intermedia Institute invites participants to share experiences and create prototypes for online interactive film projects.

The workshop is designed for up to 16 film, TV, radio, or new media makers from all over Europe. You can use your own footage (up to 40 min) to make an interactive film and publish it on- or offline. The approach is interdisciplinary and introduces new media and cross-media content production to people with a background in filmmaking.

The workshop focuses on the cinematic aspects of online interactive film and on issues of combining narratives with interactive possibilities. Participants use the ever-evolving Korsakow software to build their new media projects. Developed by the UdK Berlin and Mediamatic, Korsakow is a powerful, elegant, and easy to learn editing tool.

This 6-day workshop takes place from October 3 - 8. It starts with a symposium and ends with a public presentation of workshop projects.

The price of the workshop is 200 EUR for EU citizens of the new member states, and 300 EUR for citizens of older member states. Mediamatic provides fully equipped Apple computers for this workshop.

For more information go to www.korsakow.org or call Klaas Kuitenbrouwer: +31 20 6389901.

You can register online.

To prepare for the workshop we provide an online reader with related articles on interactivity and narration.

Posted by jo at 09:10 AM | Comments (0)

August 30, 2005

TeleSon: Invention#8 by Chris Brown


Networked reacTables

TeleSon: Invention#8 is composed for four musicians playing two networked "reacTables", one each in Barcelona and in Linz, Austria (ICMC2005). The reacTable is a new electronic music instrument developed at the Music Technology Group in Barcelona by a research team headed by Sergi Jorda. Brown explains: 'I have interfaced my own synthesis engine, written in SuperCollider3, to the reacTable's visual and gestural interface, thus composing my own synthesis environment with its own rhythmic and tonal character. Like the other works in my "Inventions" series (recorded on Pogus CD "Talking Drum"), this piece involves polyrhythms, computer networks, and improvisation. Like "Invention #5" (1999), it also involves musicians collaborating at great physical distance from each other'. See Sounding the Net: Interview with Chris Brown by Peter Traub.

Chris Brown, composer, pianist, and electronic musician, creates music for acoustic instruments with interactive electronics, for computer networks, and for improvising ensembles. Recent recordings of his music are available on Tzadik, "ROGUE WAVE" (2005) and "LAVA" (1992); and on Pogus, "TALKING DRUM" (2001). He has also performed and recorded extensively with "The Hub", "Room", and the "Glenn Spearman Double Trio". As a pianist he has recorded music by Henry Cowell, Luc Ferrari, and John Coltrane; as an electronic musician, he has collaborated with John Zorn and Wadada Leo Smith. He teaches electronic music at Mills College in Oakland, where he is Co-Director of the Center for Contemporary Music (CCM).

Posted by jo at 11:24 AM | Comments (0)

Hard Drive Orchestra


Radio Taking Place Between the Ears and Behind the Eyes

Hard Drive Orchestra--by Fundamental Radio (August Black and Markus Seidl)--is a sculpture and instrument that can be programmed to run autonomously as a stand-alone installation, or played live and intuitively in a performance, taking control commands via OSC, MIDI, or some other input protocol.

The hardware component consists of a mobile computer containing 8-16 IDE and SCSI hard drives from various eras of computing history, and therefore with varied sonic properties. The mechanical sounds of the drives are recorded live with 8-16 separate contact microphones, cleaned by a noise gate, and amplified over loudspeakers.

The software component consists of two objects written in C for PureData. One object performs very specific read-write operations on the various hard disks in order to force the disk's head to move from the innermost part of a drive's platter to the outermost part and back again. This results in an audible grinding click. The second object reads the data being written on each disk and blits it to the video screen. Essentially, the first object allows one to control the sonic activity of each drive while the second object simply displays the data to screen.
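The head-sweeping read trick can be sketched as follows. This is a hypothetical re-creation in Python, not the authors' C objects for PureData, and `grind` would need to be pointed at a raw device node (with permission to read it) to produce audible seeks; on an ordinary file it is silent:

```python
import os

def seek_pattern(device_size, block=512, cycles=4):
    """Alternate between the first and last block of the device, forcing
    the head on a real drive to sweep the whole platter each cycle."""
    offsets = []
    for _ in range(cycles):
        offsets.append(0)                    # outermost track (start of disk)
        offsets.append(device_size - block)  # innermost track (last block)
    return offsets

def grind(path, cycles=4, block=512):
    """Read one block at each offset in the sweep pattern. On a raw
    device node (e.g. /dev/sdX) the long seeks produce the audible
    grinding click the piece amplifies."""
    size = os.path.getsize(path)
    with open(path, "rb") as f:
        for off in seek_pattern(size, block, cycles):
            f.seek(off)
            f.read(block)
```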

With the Hard Drive Orchestra, Fundamental Radio can analyze and present the physical, mechanical, and, most importantly, the aesthetic properties of stored information. The HDO premiered with "The Physical Mechanism of Forgetting" at the Freud Museum in St. Petersburg [Bolshoi Prospekt, Petrogradskaya Storona, 18a] as part of the Traumlandshaften festival, October 7-10, 2003. There, successive versions of the ****soft operating system were deleted, slowly, over a week, culminating in a live performance of found and submitted data.

Posted by jo at 09:08 AM | Comments (0)

August 28, 2005

Improbable Orchestra + Burning Man


Tribal Knob Twiddling in the Desert

The Improbable Orchestra (IO) is an interactive audio installation designed to make digital music accessible to anyone. It consists of a central control unit surrounded by four speakers designed for up to four users to each control a sound sample using manual controls (knobs and buttons).

This sound installation was designed by our team to accomplish the following goals: 1. to create a tool to allow any user to experiment with working and designing music; 2. to create a visually appealing structure that creates a musical space; and 3. to create a musical tool that allows interaction between different players while they experiment with music.


Burning Man: Laser Harps, Improbable Orchestra Knob Boxes

For some of the most bizarre and unusual music-sound installation art, look no further than Burning Man [Related 1, 2, 3]. Burning, indeed: this desert-based event has in the past featured Eric Singer's Max/MSP-controlled pyrophone, a propane-powered flaming sound organ. (And, incidentally, that installation is making a repeat appearance this year.)

And what better activity when in the middle of the desert than tweaking knobs and producing strange electronic grooves? That's the idea behind Improbable Orchestra, an interactive table full o' knobs for collaborative soundmaking. Build one yourself: check out the copious design notes. Basic specs: the free Pure Data graphical multimedia development environment runs the sounds on a gutted Pentium PC with a custom power supply, with a custom circuit board connecting the knobs and fiddly bits through a Parallax Basic Stamp board. (Lots more specs on their site, missing only details of the Pd patch.)
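A sketch of how knob readings from a microcontroller might reach the sound engine. The `[0xFF, knob_id, value]` frame format and the playback-rate mapping below are illustrative assumptions, not the builders' documented protocol:

```python
def parse_frames(data):
    """Parse a byte stream of hypothetical [0xFF, knob_id, value] frames
    from the microcontroller into {knob_id: value} readings. Stray bytes
    between frames are skipped to resync."""
    readings = {}
    i = 0
    while i + 3 <= len(data):
        if data[i] == 0xFF:
            knob, value = data[i + 1], data[i + 2]
            readings[knob] = value
            i += 3
        else:
            i += 1
    return readings

def knob_to_rate(value):
    """Map an 8-bit knob value (0..255) to a sample playback rate of
    0.5x..2x on an exponential curve, so equal turns sound musically even."""
    return 0.5 * (2 ** (value / 255 * 2))
```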

"But," says you, "I hate knobs. Give me lasers, man." Sure! You obviously want the interactive Aeolian laser harp, which suspends a series of laser beams through which you can walk to trigger sound. It's the creation of former NYU ITP faculty member Jen Lewin, now based in Colorado and doing interactive sculpture full-time. She has other fantastic projects like interactive butterflies. [blogged by Peter Kirn]

Posted by jo at 10:22 AM | Comments (0)

August 27, 2005

Parasite + Parallel Worlds


Mobile Movies

Parasite--by Frédéric Eyl, Gunnar Green and Richard The--is part of the Moving Canvas series; it investigates the visual and symbolical importance of trains in an urban context and the possibility of exploiting them as brief communicative moments.

Affordable mobile video projections could be used to re-conquer public space, often reduced to graffiti and street art. The tunnels of a subway system have something mystical about them; most people have never set foot inside them. Parasite is a projection system that can be attached to subways and other trains. Using the speed of the vehicle as a parameter for the projected content, the projection starts when the train moves inside a tunnel.


All along their journey, travellers see images mysteriously appearing through the train windows: words, aquatic animals, etc. Confusing the routine of your train journey, your habits, and your perception, Parallel Worlds allows you a glimpse into a different world full of surrealist imagery.

Don't miss the video. [blogged by Regine on we-make-money-not-art]

Posted by jo at 10:25 AM | Comments (0)

August 25, 2005

StarChild + FlatWorld


Data Sonification

Data sonification has been viewed as a tool with great potential for studying complex sets of scientific data. Sonification can provide the researcher with the ability to perceive variations and trends that are invisible to more established data analysis techniques. However, few have explored the potential artistic applications of sonification. In 1996, Jarrell Pair worked with Alec Robinson to use Matlab and Csound to prototype software that transduces data from various sources, such as images, temperature, and light intensity, into aesthetically pleasing audio. This work was used to develop the audio effects for StarChild. Using Csound and a custom C program, astronomical data from the Shoemaker-Levy 9 comet collision was used as input to create audio for portions of StarChild. Additionally, images of the collisions with Jupiter were transduced into audio effects using Hyperupic, an application running on a NeXT computer. A 440k MP3 file (low sample rate) is taken from the sample files created from the comet collision.
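The general transduction idea can be sketched like this: a hypothetical mapping (not Pair and Robinson's actual Matlab/Csound code) from a data series to pitches, and from pitches to raw sine-tone samples:

```python
import math

def sonify(values, base_freq=220.0, octaves=2):
    """Map each data point to a pitch on a log-frequency scale: the
    minimum value becomes base_freq, the maximum becomes
    base_freq * 2**octaves. Ranges here are illustrative assumptions."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0
    return [base_freq * 2 ** (octaves * (v - lo) / span) for v in values]

def render(freqs, sr=8000, dur=0.25):
    """Render each pitch as a short sine tone; returns raw float samples
    in [-1, 1], one tone after another."""
    out = []
    for f in freqs:
        for n in range(int(sr * dur)):
            out.append(math.sin(2 * math.pi * f * n / sr))
    return out
```

A rising data series thus becomes a rising melodic line, which is the kind of trend the ear picks out easily.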

From July 1995 through June 1996, Pair was extensively involved in the technical development of StarChild, a multimedia opera. The opera was composed and produced by Audio Research Team director James Oliverio. Alec Robinson and Pair created sound effects for the opera using the data sonification methods they had developed as part of an ongoing team project. Pair was also involved in the installation, testing, and evaluation of the eight-channel audio steering system used in the opera.

The StarChild production team included visual futurist Syd Mead (designer for the films Blade Runner, Aliens, Tron, and Star Trek: The Motion Picture), the internationally recognized lighting designer Lloyd Sobel, animator Steve Walker, and scientists and engineers from across the Georgia Tech campus. Students met and worked with the guest artists in workshops, lectures, and in the production of the opera itself. An Internet MBone broadcast of StarChild took place on June 5th, 1996. Two live performances followed on June 6th and 7th.


FlatWorld: The Mixed Reality Simulation Space

Since 2001, Pair has overseen the design and development of the FlatWorld project at the University of Southern California's Institute for Creative Technologies (ICT).

FlatWorld is a mixed reality simulation environment merging cinematic stagecraft techniques with immersive media technology. Current virtual environments have severe limitations that have restricted their use. For example, users are often required to wear bulky head mounted displays that restrict a person’s freedom to physically move as they would in the real world. Furthermore, a person cannot touch or feel objects in the virtual world.

This project addresses these issues by developing an approach to virtual reality simulation which allows individuals to walk and run freely among simulated rooms, buildings, and streets.

Posted by jo at 10:33 AM | Comments (0)

August 19, 2005

Hypersonic Sound


Getting Inside People’s Heads

"...Unlike traditional speakers, which scatter sound, Norris’ device streams it in a precise, laser-like beam for up to 150 yards with almost no degradation in quality or volume. (…) How does it work? The piezoelectric transducer emits sound at frequencies above the human ear’s 20,000-cycle threshold. Unlike low-frequency waves, the high-frequency signals don’t spread out as they travel through air. Yet they do interact with the air to induce a related set of ultrasonic waves. These waves combine with the original waves, interfering to create an audible signal, focused into a beam.

The applications are numerous, if not apparent: Thousands of soda machines in Tokyo will soon bombard passersby with the enticing sound of a Coke being poured, and several U.S. supermarkets will promote products to shoppers as they walk down corresponding aisles. Eventually HyperSonic Sound might enable a nightclub to play disco on one side of the dance floor and salsa on the other. Ambulances equipped with hypersonic sirens could clear the streets without waking the neighbors. Norris’ company, American Technology, sells the devices for $600." From Woody Norris has a way of getting inside your head… by David Sparrow, Popular Science.
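The self-demodulation effect described above can be checked numerically: squaring the sum of two ultrasonic tones (a simple stand-in for the air's nonlinearity) produces a component at their difference frequency. The frequencies below are scaled-down assumptions chosen for a manageable sample rate, not the device's actual operating parameters:

```python
import math

def amplitude_at(signal, freq, sr):
    """Magnitude of one DFT bin: correlate the signal with cos/sin at freq."""
    re = sum(s * math.cos(2 * math.pi * freq * n / sr) for n, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * n / sr) for n, s in enumerate(signal))
    return math.hypot(re, im) / len(signal)

sr = 200_000                                  # sample rate (Hz)
t = [n / sr for n in range(sr // 10)]         # 0.1 s of signal
f1, f2 = 40_000, 41_000                       # two inaudible ultrasonic tones
beam = [math.sin(2 * math.pi * f1 * x) + math.sin(2 * math.pi * f2 * x) for x in t]

# A quadratic nonlinearity mixes the two tones, creating sum and
# difference components; only the 1 kHz difference tone is audible:
demod = [s * s for s in beam]

audible = amplitude_at(demod, f2 - f1, sr)    # strong component at 1 kHz
silent = amplitude_at(demod, 3_000, sr)       # nothing at an unrelated frequency
```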

Posted by jo at 10:12 AM | Comments (0)

Lua Player


PSP Homebrew

Whilst we wait for the PSP release here in England, ps2dev user Shine is developing Lua Player. This is good news for the homebrew community, as it enables beginners to create their own games using the Lua programming language. No need to compile any code: simply edit the text file, drop your graphics and sounds into the folder, and copy to the PSP.

The Lua Player site has recently been born, and budding creators can download the software, see other people's games in the gallery (including source), and read tutorials. Lua Player can also handle sound and all of the button inputs. A PC version of the player is in the works that will enable you to test games before loading them onto the PSP. I am interested in this from an art-mod point of view, but I can see it becoming a very big resource in the future. Unfortunately for us Europeans, Sony has put firmware 2.0 on the release PSP, so no homebrew for us yet. [blogged by Chris on pixelsumo]

Posted by jo at 07:29 AM | Comments (0)

August 18, 2005

Video Traces


Capturing and Annotating Media both Visually and Verbally

Video Traces was conceived and designed by Dr. Reed Stevens of the College of Education and has been developed and studied in collaboration with PETTT. Video Traces is a system that makes it easy to capture a piece of rich digital media, such as video or a digital image, and to annotate that media both visually (using a pointer to record gestures) and verbally. The resulting product is a "video trace": a piece of media plus its annotation--in essence, a recorded "show & tell". Traces can be viewed by their creator, exchanged with others, and further annotated for a variety of teaching and learning purposes. Video Traces provides a unique opportunity to capture embodied knowledge and educational interactions by supporting the most common ways people communicate their ideas--through talking, showing, and pointing.
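Structurally, a trace is a media reference plus time-stamped gesture and narration streams that can be replayed in sync. A sketch of that data structure (my own reconstruction, not the project's actual implementation):

```python
class VideoTrace:
    """A 'trace': a media reference plus its visual (pointer gesture)
    and verbal (narration) annotation streams, timestamped so playback
    can replay the show-and-tell in sync."""
    def __init__(self, media_uri):
        self.media_uri = media_uri
        self.gestures = []   # (t, x, y) pointer positions over the media
        self.audio = []      # (t, chunk) spoken narration

    def point(self, t, x, y):
        self.gestures.append((t, x, y))

    def narrate(self, t, chunk):
        self.audio.append((t, chunk))

    def events(self):
        """Merge both streams into one time-ordered annotation track."""
        merged = [(t, "gesture", (x, y)) for t, x, y in self.gestures]
        merged += [(t, "audio", c) for t, c in self.audio]
        return sorted(merged, key=lambda e: e[0])
```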

The Video Traces project serves PETTT goals in a number of ways:

· Explore the interplay of technology and pedagogy in real settings: We have collaborated with individuals both within and outside of the University, and have explored educational uses of Video Traces in settings such as dance studios, architectural sites, and science museums.

· Facilitate thoughtful and innovative educational technology uses: We have found that using Video Traces prompts reflection on the part of both learners and instructors, and in several cases has inspired instructors to adopt innovative new strategies for teaching their courses and for assessing student learning.

· Make strong connections between research, design, and practice: We have used our observations about how people use Video Traces and our interviews with learners and instructors to inform the redesign of the software and generate further questions for research.

Video Traces: Rich Media Annotations for Teaching and Learning by Reed Stevens, Gina Cherry, and Janice Fournier

Posted by jo at 05:39 PM | Comments (0)

August 04, 2005

Virtual Playa Project


Burningman Cybercamps

The VIRTUAL PLAYA PROJECT is a navigable 3D digital Burningman environment using Microsoft Flight Simulator as a platform. It is intended to be an open-ended project that invites participation at various levels. It can be downloaded for home use, played on a giant screen at a Burningman event, or even used as a design tool for a theme camp or artist wishing to plan an installation before it ever gets to Black Rock City.

The ultimate wish for the project, however, is for the Virtual Playa to be the Burningman Cyber Regional. Using multiplayer technology, it can become a portal through which we can meet online and share experiences with other cyber burners from anywhere in the world in real time. This takes the project from just being a cool piece of collaborative digital art to a true meeting place for the cyber-tribe. Download it for free, copy it, send it to pals, leave it on buses, give it away as a gift.....spread the word.

Posted by jo at 07:01 PM | Comments (0)



Haptic Magical Mystery Tour

The Hyperfabric fabric-based interface lets you reach beyond the screen. This "touchable" touchscreen, made out of an elastic, latex-like fabric, warps like rubber and can sense how hard you press it, where you press it, and even when several people use it at once. It feels like you are going "through" the screen.

You can press, grab, twist, punch and play with the screen. It can even support your full bodyweight. The Hyperfabric screen is specially designed to communicate with a computer to generate interactive computer graphics, in realtime.
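One plausible way such a surface could resolve several simultaneous presses is peak-finding on a grid of displacement samples. A toy sketch under that assumption, not HMC MediaLab's actual sensing code:

```python
def find_presses(depth, threshold=0.2):
    """Find local maxima in a 2D grid of fabric displacement values.
    Each peak above the threshold counts as one 'press', so several
    people pushing at once show up as several peaks."""
    presses = []
    rows, cols = len(depth), len(depth[0])
    for r in range(rows):
        for c in range(cols):
            v = depth[r][c]
            if v < threshold:
                continue
            neighbors = [depth[rr][cc]
                         for rr in range(max(0, r - 1), min(rows, r + 2))
                         for cc in range(max(0, c - 1), min(cols, c + 2))
                         if (rr, cc) != (r, c)]
            if all(v >= n for n in neighbors):
                presses.append((r, c, v))  # position plus press strength
    return presses
```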

Adam Montandon and the team at HMC MediaLab created a 7-foot-high Hyperfabric installation for the Artytechs parlour this summer at the Port Eliot literature festival. In a hidden dungeon underneath the stately home, visitors had the chance to come face to face with the magical and mysterious interactions. [blogged by Regine on we-make-money-not-art]

Posted by jo at 08:28 AM | Comments (0)

August 01, 2005

Exploration of TIME + Delay Phone


Delay as a New Communication Style

The Exploration of TIME exhibition poses the question: "(i)s it 'time' that produced humans, or is it humans that created 'time'? Humans are inseparably bound up with 'time.' Time assumes diverse, multi-layered forms in the world, ranging from the micro- and macro-world of science, such as the universe, living organisms, life, and materials, to the world of cultural representations. Our explorations and experiences in a new world of time will certainly provide us with a hint for humans' future lifestyle or ways to deepen our thinking."

Delay Phone, by Ryota Kuwakubo, is a video phone system that allows users to intentionally increase or decrease the delay in audio communication. The system is designed to let users experience how subtle changes in audio delay may affect inter-personal communication. It may also allow users to see how new communication styles may emerge with a certain amount of delay.
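The adjustable delay at the heart of the piece is classically implemented as a ring buffer whose read position trails the write position by the delay length. A minimal sketch of that technique (not Kuwakubo's implementation):

```python
class DelayLine:
    """Audio delay line whose length can be changed on the fly, as in
    Delay Phone's user-adjustable conversational delay."""
    def __init__(self, max_delay_samples):
        self.buf = [0.0] * max_delay_samples
        self.pos = 0                       # next write position
        self.delay = max_delay_samples

    def set_delay(self, samples):
        """Clamp the requested delay to the buffer size."""
        self.delay = max(1, min(samples, len(self.buf)))

    def process(self, sample):
        """Write one incoming sample, return the sample from 'delay'
        samples ago (silence until the buffer fills)."""
        read_pos = (self.pos - self.delay) % len(self.buf)
        out = self.buf[read_pos]
        self.buf[self.pos] = sample
        self.pos = (self.pos + 1) % len(self.buf)
        return out
```

Lengthening the delay mid-conversation simply moves the read position further behind the write position.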

One of Kuwakubo's major interests is in how technologies may change the way people communicate. R/V and PLX clearly reflect his interests in this area. Even Duper/Looper, a knocking machine, is inspired by the way we communicate in a specific context: bathrooms. Imagine a machine like this inside a bathroom, which knocks back on the door, copying exactly the way you knocked…Related websites: vector::scan and Time Travel Exhibition at YCAM (in Japanese) [blogged by manekineko on we-make-money-not-art]

Posted by jo at 08:03 AM | Comments (0)

July 26, 2005

Emotive Virtual Actors


Man looks at audience for 2 secs...

Emotive Virtual Actors by Ken Perlin:

INSTRUCTIONS: Using the mouse: Drag hands and feet with the left mouse button to move arms and legs + Drag with the right mouse button to change the view + Click/drag on any button, menu or slider + With "walk" enabled, click on destination to walk there + Click on "F1" or "F2" to select an actor.

THINGS TO TRY: Pull the hand away from the body to see torso compliance + Try lifting up a foot and watch him shift his weight. Then try putting the foot down again + Click on the "object" option, and watch how he holds an object as you drag his two hands around. You might want to turn on the "closeup" option to see this better...

Posted by jo at 07:44 AM | Comments (0)

July 18, 2005

Op Art + DualView LCD


"Magic" by Yaacov Agam


"DualView LCD" by Sharp

Optical Art, or Op Art (1950s to 1960s), is a mathematically-themed form of abstract art which uses repetition of simple forms and colors to create vibrating effects, moiré patterns, foreground-background confusion, an exaggerated sense of depth, and other visual effects.

In a sense, all painting is based on tricks of visual perception: manipulating rules of perspective to give the illusion of three-dimensional space, mixing colors to create the impression of light and shadow, and so on. With Optical Art, the rules that the viewer's eye uses to try to make sense of a visual image are themselves the "subject" of the artwork.

Sharp has fused two LCD screens into one with its DualView LCD technology. The image changes depending on the viewing angle, bringing back sweet memories of when collectible comic trading cards included a more primitive version in their sets as a rare bonus. Notice that the mirror image in the picture on the left is different from what is shown. Simply amazing; I cannot wait to see what happens when such technology is implemented in desktops and notebooks in the near future!

Posted by jo at 10:44 AM | Comments (0)

July 11, 2005



Waving Goodbye to Touch Screens

Ethertouch is working with the likes of Nokia, Bang & Olufsen and Microsoft to create applications for its touch-free technology, which can sense your finger movements in 3D. The system would replace keypads or mice with non-tactile control via motions or gestures that will enable you to simply point at a desired area of a display screen and zoom in on the relevant section.

An array of Ethertouch sensors tracks the position and velocity of your finger or hand as it passes through the field and converts the data into a digital signal, which is then processed. This ability to measure velocity as well as position makes the technology particularly attractive to the computer games industry, where it could enable a new level of immersion in VR gaming. The touch-free interfaces could appear on the market by the end of next year.
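Position-plus-velocity sensing of this kind is easy to sketch. The gesture rule below ("fast motion toward the screen means zoom") is purely an illustrative assumption, not Ethertouch's algorithm:

```python
def velocity(p1, p2, dt):
    """Estimate 3D finger velocity from two successive position samples
    taken dt seconds apart."""
    return tuple((b - a) / dt for a, b in zip(p1, p2))

def classify(vel, push_axis=2, push_speed=0.5):
    """Hypothetical gesture rule: fast motion along the axis toward the
    screen (decreasing z) reads as a 'zoom' push; anything else is an
    ordinary pointer move."""
    return "zoom" if vel[push_axis] < -push_speed else "move"
```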

Other Minority Report-like interfaces: Geo-spatial gestural interface, "Data-rich" environment for scientific discovery, Raytheon's, etc. [blogged by Regine on we-make-money-not]

Posted by jo at 09:32 AM | Comments (0)

July 04, 2005

Music Insects


Tool for Visual Music Performance

Music Insects by Toshio Iwai (1992). Permanent collection at the Exploratorium, San Francisco, U.S.A. These "music insects" "react" to color dots on the screen. When they pass over the dots, they trigger musical scales, sounds, and different light patterns. The user selects colors from a palette with a trackball and paints in the insects' path; the insects "perform" the colors when they pass over them. The insects' direction can be changed with certain colors, and colors can be painted to achieve less random musical "performances." This piece is a tool for visual music performance.
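The behaviour described can be re-created as a toy simulation. The palette, note assignments, and turning rule below are assumptions for illustration, not Iwai's actual mappings:

```python
NOTE = {"red": "C", "yellow": "E", "blue": "G"}   # note-triggering colors (assumed)
TURN = {"green"}                                  # direction-changing color (assumed)

DIRS = [(1, 0), (0, 1), (-1, 0), (0, -1)]         # right, down, left, up

def run_insect(grid, start=(0, 0), heading=0, steps=8):
    """Walk an insect across a painted grid: note-colors trigger notes,
    turn-colors rotate its heading 90 degrees. Returns the notes
    'performed' along the way."""
    x, y = start
    notes = []
    for _ in range(steps):
        color = grid.get((x, y))
        if color in NOTE:
            notes.append(NOTE[color])
        elif color in TURN:
            heading = (heading + 1) % 4
        dx, dy = DIRS[heading]
        x, y = x + dx, y + dy
    return notes
```

Repainting the grid between runs changes the melody, which is the "less random performance" the description mentions.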

Posted by jo at 07:29 AM | Comments (0)

June 29, 2005

Speeder Reader


An Experiment in the Future of Reading

"ABSTRACT: Speeder Reader is an interactive reading station built around two primary ideas: dynamic text (especially RSVP, that is rapid serial visual presentation), and the interface metaphor of driving. As words flash one at a time on a screen in front of the reader, he or she controls the rate of speed of the words with a gas pedal (up to 1850 words per minute in the current instance). Text stream selection is performed with a steering wheel. Thus, one can "drive through a book." We leverage people's knowledge of the familiar activity of driving an automobile (or, in the case of children, operating a speed-racing video game) to allow comfortable and intuitive access to a possibly less familiar world of interactive text. We emphasize the power and ease of the familiar driving metaphor as a navigation device. Speeder Reader was first installed at the Tech Museum of Innovation in San Jose, California, as a part of a larger exhibit on the impact of digital technologies on reading." From Speeder Reader: An Experiment in the Future of Reading [PDF] by Maribeth Back, Jonathan Cohen, Rich Gold and Steve Harrison.
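The driving metaphor reduces to two mappings: pedal position to words per minute, and words per minute to per-word display time. A sketch of both (the 1850-wpm ceiling is from the abstract; the 100-wpm floor is an assumption):

```python
def pedal_to_wpm(pedal, min_wpm=100, max_wpm=1850):
    """Map gas-pedal position 0.0..1.0 to reading speed in words per
    minute; 1850 wpm is the installation's stated top speed."""
    return min_wpm + pedal * (max_wpm - min_wpm)

def rsvp_schedule(text, wpm):
    """Return (word, seconds_on_screen) pairs for rapid serial visual
    presentation: each word gets 60/wpm seconds."""
    dwell = 60.0 / wpm
    return [(w, dwell) for w in text.split()]
```

At full throttle each word is on screen for roughly 32 milliseconds.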

Posted by jo at 10:02 AM | Comments (0)

June 28, 2005


Live Coding

Fluxus: act of a flowing; a continuous moving on or passing by, as of a flowing stream; a continuous succession of changes. Fluxus allows you to write Scheme scripts that create graphics live, interpreting audio and OSC input as a source of animation data. Fluxus also uses a fully featured physics library, which means you can script physical properties into objects and simulate them in realtime. Released for Linux under the GPL licence.

The built-in Scheme code editor runs on top of the renderer (see screenshots), which means you can edit the scripts while they are running. This allows Fluxus to be used for livecoding performances, or simply as a fast-feedback way of experimenting with or learning about graphics and animation. Fluxus lends itself to procedural modelling and animation: there is no model import functionality, only simple primitives. There is an experimental procedural modelling tool, however, and full support for texturing and basic material properties.

Posted by jo at 10:54 AM | Comments (0)

June 23, 2005

Digital Human Body Communication


Body Area Network [BAN]

Digital human body communication has been unveiled to the public for the first time. It is also called BAN (Body Area Network), as it handles communication between devices using the human body as a medium. The Electronics and Telecommunications Research Institute (ETRI) said that although only a small amount of data, such as the information on a name card, can be transmitted at the moment, because the data transmission speed is currently just 2.4 Kbps, the speed will be improved to 1 Mbps by the end of the year.

ETRI explained that BAN can be utilized in numerous ways, such as touch-based authentication services, electronic payment services, e-business card services, and touch-based advertisement services. [via Telecoms Korea] [blogged by Emily on textually.org] [Related]

Posted by jo at 11:14 AM | Comments (0)

June 21, 2005

Civic Maps

A Mapping Module for Civicspace and Drupal

Here is a first pass functional mapping module for civicspace and drupal as demonstrated here: http://civicmaps.org/?q=2005_06_20_Scappoose_Events

Essentially, if you were to install drupal and civicspace you could have a location-blogging site up and running quite easily now. Are there any other such services in the world at all right now? I don't think so; so I do believe (at least from our limited perspective) that this is something that may be genuinely useful. Here is the site for the download: http://maps.civicactions.net#download

This is hot off the presses and undoubtedly there are defects. However, it is a milestone in our development work here, and as is often said, 'publish early and publish often'. Having reached this milestone I wanted to briefly summarize what the deeper purpose of this was, since it is not documented on the site:

* Civicactions (Anselm Hook) and quite a few others are continuing to look for ways to help energize real-world community, not just virtual community. The challenge is that a lot of our urban landscapes are somewhat mechanized. They've scaled up to industrial proportions but we have not equivalently developed an industrial-scale voice. Although possibly a romantic view, there is the feeling that in earlier and smaller communities it was much easier to inadvertently be a part of the social knowledge commons of that community. The hope here is that rich client-side mapping tools that are dispersed over many servers (instead of centralized) will help create a truer, richer and more authentic voice than what we get from the commercial mapping sites.

* This work although free is actually quite expensive. It is time consuming and laborious to develop for very little financial gain. At the same time, if somehow services like this could 'raise all ships' then perhaps we might not have quite so much a need for money in our society. Money is a substitute for awareness. As local currencies show, one undervalued function of money is as a discovery tool or a catalyst. If I were better connected to my own physical neighbourhood I might be able to message all my neighbours asking who could lend me a ladder, or a boat or some such thing - rather than purchasing one or renting one. Or equivalently I might have a lot of free time on my hands and could volunteer for some fun local neighborhood project if only my peers knew I was interested. For many people, especially in mobile urban societies, it is hard to have that kind of groundedness and trust. Tools that accelerate that process, that show us our landscape, our peers, where our friends are or have been, the knowledge that they shared... well, these are all ideas that are out there and have been discussed quite a bit - but it still doesn't feel as if they have really been exercised. So this is more of the same in a sense; continuing to try to push out tools that help get computers to come out and play with us, and help us in the real world, instead of sucking us into a virtual, contextually disjointed reality. If 80 percent of human discussion involves space and location, then 80 percent of our tools should support this.

We definitely appreciate feedback and even flames - post your thoughts on the http://civicmaps.org site. This process is driven by user feedback and interest. Continually in open source development one is trying to pick the absolute best project and absolute best use of time that one can possibly pursue. Feedback on this particular project will dictate if it gets more attention and time.

Anselm Hook [originally posted to locative]

Posted by jo at 09:07 AM | Comments (0)



Become a Walker

rixome is a network and a tool that turns mobile screens into windows that show the virtual and public dimensions of our point of view. A walker (a rixome user) can see on his/her mobile phone/PDA/laptop screen the virtual interventions that have been added to the location where s/he now stands. For example, a spoken message can be left at a given location for other "walkers" to hear through headphones whenever they pass by. The message can also be written, or it can be a 3D animation or image, a photograph, a drawing, or a video.

Remote rixome users can also check via the Internet the traces left by others, but they won't be able to add an intervention similar to those published in situ. Developed by gelo for his Master of Art and New Technologies at the Universidad Europea de Madrid. [blogged by Regine on near near future]
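The core lookup a walker's device needs (show the interventions left near my current position) can be sketched with a great-circle distance test. The data layout and the 25 m radius are assumptions for illustration, not rixome's actual design:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    R = 6_371_000  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def interventions_near(store, lat, lon, radius_m=25):
    """Return the interventions left within radius_m of the walker,
    i.e. what the mobile screen should show at this spot."""
    return [item for (ilat, ilon, item) in store
            if haversine_m(lat, lon, ilat, ilon) <= radius_m]
```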

Posted by jo at 08:54 AM | Comments (0)

June 03, 2005

Technologies of Cooperation: A Map To a Toolkit


Tools to Alleviate Suffering...Liberate...Create

"What we are witnessing today is the acceleration of a trend that has been building for thousands of years. When technologies like alphabets and Internets amplify the right cognitive or social capabilities, old trends take new twists and people build things that never could be built before. Over time, the number of people engaged in producing new things has grown from an elite group to a significant portion of the population; at the same time, the tools available to these growing populations have grown more powerful.

Today's technologies of cooperation are practical tools for organizing people and solving problems that we face right now. But they are also harbingers of new forms of social and economic organization -- forms that may help resolve some of the complex social dilemmas that confront the world. So each example of a cooperative technology is also a model for thinking about future social forms as well as future tools; each example embodies principles that can help us think more strategically." From Technologies of Cooperation: A Map To a Toolkit by Howard Rheingold, theFeature, Jun 02, 2005.

Posted by jo at 01:50 PM | Comments (0)

May 04, 2005



Next Gen VR Modeling

A giant leap forward in the realism of virtual reality (VR) may be just around the corner as a team of European researchers nears completion of a pioneering project to add textures, lighting effects and ‘feel’ to computer-generated 3D models. Launched in 2002, the RealReflect project was the first attempt to use a new image acquisition technique known as the Bidirectional Texture Function (BTF), which captures the look and feel of different materials. [...]

"RealReflect is a major advancement over traditional virtual reality modelling, which basically relies on simplifications of reality by describing optical properties of a surface by a 2D matrix of data that does not show the real effects of lighting...Traditional virtual reality modelling, despite its name, lacks the feeling of reality and is a poor representation of it because the way things look highly depends on how they are illuminated and from what direction they are being viewed."

By taking those two aspects – lighting and viewing direction – into account, the RealReflect system is capable of acquiring and rendering in VR even the most subtle textures, from leather on a car seat and wood panelling on a dashboard to metallic paint or chrome on door handles. Textures can be acquired from physical samples and then rendered onto the 3D models. [Link to full image of gearshift on right modeled by Uffizi lighting] [...]

That in turn brings with it additional complications. In order to be able to realistically represent textures the system requires a thousand times more data than other VR modelling tools, leading the project partners to develop compression techniques for the BTF information. The compression allows the models to be viewed and worked on in real time. [...]
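In rough terms, a BTF generalises an ordinary 2D texture by also indexing each texel on the light and view directions, which is exactly where the thousandfold data growth comes from. A toy sketch in Python (the table sizes, layout and random contents are illustrative assumptions, not RealReflect's actual format):

```python
import random

# Toy BTF: texture colour indexed not only by surface position (u, v)
# but also by discretised light and view directions. A real BTF stores
# measured reflectance; random values stand in here for illustration.
N_LIGHT, N_VIEW, TEX = 8, 8, 16        # assumed discretisation sizes

btf = [[[[(random.random(), random.random(), random.random())
          for _x in range(TEX)] for _y in range(TEX)]
        for _v in range(N_VIEW)] for _l in range(N_LIGHT)]

def sample_btf(u, v, light_idx, view_idx):
    """Nearest-neighbour lookup: u, v in [0, 1); direction indices."""
    x = min(int(u * TEX), TEX - 1)
    y = min(int(v * TEX), TEX - 1)
    return btf[light_idx][view_idx][y][x]

texel = sample_btf(0.5, 0.5, light_idx=3, view_idx=1)  # an RGB triple
```

Even this coarse 8 x 8 grid of direction pairs multiplies the storage of a single texture 64-fold; measured BTFs sample directions far more densely, hence the compression work described above.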

The project also developed methods to take a small acquired sample of a material and multiply it seamlessly on a 3D model, which when viewed would show not only the texture but also its appearance under different types of illumination from different angles.

[Information Science Technologies]
[Demos and Downloads at RealReflect] [blogged by John on Ratchet Up!]

Posted by jo at 10:57 AM | Comments (0)

April 26, 2005

Video Conferencing Software as a Performance Medium


Is There No There There?

"ABSTRACT: This paper surveys past performances in which the author collaborated with several other dancers, musicians, and media artists to present synchronized co-located performances at two or more sites. This work grew out of the author's participation in the landmark computer music ensemble, "the HUB". Each of the various performances was made possible by an evolving array of video conferencing hardware and software. These will be discussed. The problems and interesting side effects presented by latency and dropouts are a unique part of this performance practice. Leveraging the concepts of shared space, video and audio feedback generate evolving forms created by the combinations of the space, sounds and movements of the participants. The ubiquity of broadband Internet connections and the integration and constant improvement of video conferencing software in modern operating systems makes this unique mode of performance an essential area of research and development in new media performance." From Video Conferencing Software as a Performance Medium by Scot Gresham-Lancaster.

Posted by jo at 10:31 AM | Comments (0)

March 24, 2005

Art Interface Device


Tool for Responsive Environments

The Art Interface Device (AID) is a microprocessor platform for building electronic installation art. It can also be used as an interface between such artworks and computers. It can enable artworks to respond to people and the physical environment.

The associated AID web site serves as a library of "worked" solutions to common electronic sensing, control and interfacing requirements. From this site the AID user community can download and contribute instructions, parts lists, circuit diagrams and other information relevant to the creation of electronic art. The AID system is distributed under the General Public Licence and developed by the AID user community.

Posted by jo at 12:07 PM | Comments (0)

March 21, 2005



RASTA Software

dyne:bolic is shaped on the needs of media activists, artists and creatives as a practical tool for multimedia production: you can manipulate and broadcast both sound and video with tools to record, edit, encode and stream, having automatically recognized most devices and peripherals: audio, video, TV, network cards, firewire, usb and more; all using only free software!

You can employ this operating system without the need to install anything, and if you want to run it from hard disk you just need to copy a directory: the easiest installation ever seen!...dyne:bolic is RASTA software released free under the GNU General Public License. This software is about Digital Resistance in a babylon world which tries to control and market the way we communicate and share our interests and knowledge. Read the manual here.

Posted by jo at 08:18 AM | Comments (0)

March 16, 2005

Locus Experimental Social Interface


You Are How You Act

Locus Experimental Social Interface is an instant messenger, social experiment, and art piece that is built on the premise that you are how you act. By examining how you and your friends talk with each other, Locus automatically tries to find likenesses between your friends. You can then see these potential relations mapped out for you! You can watch your social web grow as you speak.

And trying out Locus:ESI is simple: Locus adapts to your life. There's none of this "Ask all your friends to join" nonsense; Locus lets you use your pre-existing instant messenger accounts. AOL? ICQ? MSN? Yahoo? No problem! [via Rhizome]
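The post doesn't say how Locus measures "likeness", but one simple way to sketch the idea is pairwise cosine similarity over word-frequency vectors built from each friend's chat text (purely an illustrative assumption, not Locus's actual algorithm; the names and logs are made up):

```python
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word-frequency vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical chat logs for three friends
logs = {
    "ann": "let's grab coffee and talk about the show",
    "bob": "coffee sounds good, the show was great",
    "cat": "compiling the kernel again tonight",
}
vecs = {name: Counter(text.split()) for name, text in logs.items()}
print(cosine(vecs["ann"], vecs["bob"]) > cosine(vecs["ann"], vecs["cat"]))  # True
```

Here "ann" and "bob" talk about the same things, so their score comes out higher than "ann" and "cat"; a system like Locus would then map such relations visually as your social web grows.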

Posted by jo at 09:40 AM | Comments (0)

March 14, 2005

networked objects


Objects: In-Between Wearables and Environments

Potent Objects is a series of interactive objects that questions simplistic tropes of interaction while referencing the deeper emotional states implied by the physical grammar of interactivity. The objects explore our social anxiety about machines that can feel or emote, and our complex feelings about interacting with these machines. Are our machines becoming more like us as they learn to sense more about our actions, or are we becoming more like them as we tailor our actions to their limited sensing abilities?

Through the use of double entendres and plays on words, Potent Objects examines the tropes of interactivity as metaphors for human emotion. Each object is based on a word that refers to both a physical gesture and an emotional state. Some objects will try to 'learn' about certain emotions or interactive behaviors by capturing and incorporating users' actions into the object's own repertoire of 'emotion'.

Completed objects include Shaken and Balance. Other metaphors I might explore include: "held", "hidden", "blown away", "slip of the tongue", "thrown for a loop", and "leap of faith".

Shaken, 2003
In the Shaken object, the physical gesture of shaking the device initiates the object's response of being 'shaken'. This object consists of a snow globe with an embedded LCD screen and tilt sensor. The more the user shakes the object, the more momentum is added to a video of a woman shaking out of control. Future versions of this piece will incorporate video clips which evoke a first-person depiction of being shaken or disorientated.

Balance, 2003
The Balance piece consists of a 4"x6" LCD screen mounted vertically on top of a copper pole. The pole is attached to a scale-like base with long arms and sliding weights. Adjusting the weights along the arms affects the video on the LCD, causing a woman to careen off screen or to stabilize her fall. The tilt of the object in either direction corresponds to the degree the woman has fallen towards the floor. When the object is balanced, the woman comes to a stable position atop the object's fulcrum.

Posted by michelle at 02:36 AM | Comments (0)

March 09, 2005



A Psychogeographical Tour

"(In Dérive=Drift: a Psychogeographical Tour) six subjects engaged in systematic dérives (a term coined by the French Situationists that translates as 'to drift'). The subjects went on walks around the neighborhood of SE Belmont in Portland, OR. The focus of the trip was on the journey, and as such no destination was set. The subjects recorded their observations and emotions, using the PML model as defined by socialfiction.org.

The following demonstration is a collection of their experiences. It aims to create a new map of the city, based on the collective experience of those that inhabit it."

Posted by jo at 12:33 PM | Comments (0)

March 07, 2005



Full-Fledged Multimedia Performance Environment

A new version of the critically-acclaimed network-performance Quintet.net has just been announced. Quintet.net, which was called by computer musician Ian Whalley a highlight of the 2003 ICMC, now features an elegant brushed-metal graphical user interface and many improvements under the hood, such as network jitter compensation.

With its Viewer add-on, the application is a full-fledged multimedia performance environment with real-time notation, microtonal playback capabilities and a suite of authoring tools, the Composition Development Kit. Quintet.net is based on Max/MSP/Jitter and is available for free for both the Macintosh OS X and Windows platforms.

The Hamburg Network Composers' Collective, founded in 2003, is a permanent ensemble for the performance of compositions written or transcribed for Quintet.net.

Quintet.net was featured in the 2004/05 issue on network music of the Neue Zeitschrift für Musik (with CD-ROM) and in the current, February 2005 issue of Leonardo Journal.

More information is provided at http://www.quintet.net (the site was just updated and features background information on several international Quintet.net projects as well as network music performance in general).

Posted by jo at 08:00 AM | Comments (0)

February 28, 2005

Ad Hoc Information Spaces

Temporary Networks

"Abstract: We describe the concept of ad hoc information spaces as a way of distributing information in an environment depending on user mobility and relative location. Ad hoc information spaces are realized using a decentralized approach to ubiquitous computing, which is based on functionally self-contained devices and ad hoc networking. Users are able to construct and manipulate the properties of these information spaces by means of moving and manipulating a variety of devices. In order to explore the possibilities of using ad hoc information spaces to support group collaboration, three prototypes were developed: the Hummingbirds, the Generalized Hummingbirds and the NewsPilot. These are described along with some of the empirical findings that support their design." Ad Hoc Information Spaces - Johan Redström, Lars Erik Holmquist, Per Dahlberg and Peter Ljungstrand. [via]

Posted by jo at 10:29 AM | Comments (0)

February 23, 2005

Seine hohle Form


Collaborating on Interactive Performance Works

Abstract: Composers and choreographers face unique and largely unexplored problems as they collaborate on interactive performance works. Not the least of these problems is settling on schemes for mapping the various parameters of human movement to those possible in the world of sound. The authors' collaborative piece, Seine hohle Form, is used as a case study in the development of effective mapping strategies, focusing on dance gesture to real-time music synthesis. Perceptual correlation of these mapping strategies is stressed, albeit through varying levels of abstraction. Read Seine hohle Form: Artistic Collaboration in an Interactive Dance and Music Performance Environment by Joseph Butch Rovan, Robert Wechsler and Frieder Weiß, Crossings: Electronic Journal of Art and Technology, Issue 1.2

Posted by jo at 03:38 PM | Comments (0)

February 14, 2005

Troika Ranch


Three by Troika

In Plane (1994) was a duet for a dancer and her video image representation. The dance used the MidiDancer system to allow the performer to control the generation of music, the recall of video images, the theatrical lighting and the movements of a robotically controlled video projector.

The Electronic Disturbance (1996): Searching for the corporeal within the virtual: a "tri-coastal" dance theater performance. Inspired by the book of the same name by The Critical Art Ensemble, The Electronic Disturbance shows the human body in flux: a body whose contact with other bodies comes more and more often not physically, but electronically. The piece explores the liberating aspects of this "electronic" body, like freedom from time or gravity, as well as the more sinister implications of a body that might be easily manipulated by external forces.

Surfacing (2004) combines dance and media to explore how tiny apocalypses and the grandly commonplace create imaginary surfaces through which we may either emerge or remain held captive. Troika Ranch's Artistic Directors, choreographer Dawn Stoppiello and composer/media artist Mark Coniglio, have created a world where real-time movement is captured by onstage cameras and subsequently trapped inside the surface of the video screen. The interplay of the real and the virtual asks the question, are you on the inside or the outside, and do you have the courage to break through?

Posted by jo at 02:56 PM | Comments (0)

The Resonant Image


Graphic Scores for Performance

Graphic scores are images created to convey instructions for the performance of sound or music without the inclusion of standard musical notation. The Resonant Image, a new two-stage exhibition in the Stasis_Space gallery, will explore the creation and interpretation of graphic scores by contemporary artists. In Stage 01 of the exhibition, a series of new graphic scores by a variety of artists will be displayed for a four month period. During those four months, another set of artists will be asked to create sonic interpretations of these scores. Stage 02 of the exhibition will present the best of these performances as downloadable mp3s.

Posted by jo at 11:48 AM | Comments (0)

February 07, 2005



First Class Closures

SuperCollider is an environment and programming language for real time audio synthesis. You can write programs to generate or process sound in real time or non real time. SuperCollider can be controlled by MIDI, the mouse, Wacom graphics tablet, and over a network via Open Sound Control.

SuperCollider is mostly like Smalltalk but has a different syntax. Everything is a first class object including integers, floats, chars. It uses Smalltalk style dynamic binding of methods. SuperCollider has very flexible argument passing: positional arguments, any-order keyword arguments, variable length argument lists, default argument values. SuperCollider has first class closures like Scheme and co-routines.
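The language features listed above (first-class functions with closures, default and keyword arguments, variable-length argument lists) can be illustrated by analogy in Python; SuperCollider's own syntax differs, so treat this strictly as a conceptual sketch:

```python
# First-class closures: a function that captures and mutates state,
# as in SuperCollider or Scheme (Python used purely as an analogy).
def make_counter(start=0):
    count = start
    def step(increment=1):          # default argument value
        nonlocal count              # the closure owns `count`
        count += increment
        return count
    return step

tick = make_counter(10)
print(tick())                       # 11
print(tick(5))                      # 16

# Flexible argument passing: variable-length positional arguments
# plus an any-order keyword argument.
def mix(*signals, gain=1.0):
    return [gain * s for s in signals]

print(mix(0.2, 0.4, gain=0.5))
```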

Posted by jo at 12:49 PM | Comments (0)

February 01, 2005

Geodetic Landlines


Dynamic Multi-User GPS Drawing Tool for Participatory Exhibitions

Geodetic Landlines--by Jen Southern, Jen Hamilton and Jon Wetherall--is an on-line multi-user collaborative drawing tool for GPS-enabled mobile phones, in which users draw by moving in real space. It is intended for exhibitions in which the audience participates and contributes to the evolving installation, and for collaborative spatial practices between distant locations. It is both a creative and an exhibition tool for multi-user art in which the artist collaborates with a wider social group, addressing some of the problematics of working with process-based social practice.

The phone application connects with a separate Bluetooth GPS device and draws a line representing this data on the phone screen during a journey. This information is then dynamically relayed to a server via WAP. Information can also be requested from the server by a phone using the system. If the user accesses live information from a different city or country, they can make collaborative drawings or navigate via the routes and decisions made by a distant audience, allowing them to see their city anew and be connected to the spatial experience of another place and another individual.

These collaborative GPS drawings will be viewed live on a website, or projected into a gallery space, where user choice or live XML feeds of weather, current events or pollution data can be used to modify line colour, thickness and opacity.
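The client-side flow described above (log GPS fixes during a journey, then relay the line to a server) can be sketched roughly as follows; the class and its JSON payload are hypothetical stand-ins, since the project's actual WAP protocol isn't documented here:

```python
import json
import time

# Client-side sketch: accumulate GPS fixes during a journey and bundle
# them as one polyline for upload. The real system used a Bluetooth
# GPS unit and relayed data via WAP; this payload format is invented
# purely for illustration.
class GpsTrace:
    def __init__(self, user):
        self.user = user
        self.points = []                  # (lat, lon, timestamp)

    def add_fix(self, lat, lon):
        """Record one GPS fix with the time it arrived."""
        self.points.append((lat, lon, time.time()))

    def to_payload(self):
        """Serialise the trace as a single drawable polyline."""
        return json.dumps({
            "user": self.user,
            "line": [[lat, lon] for lat, lon, _ in self.points],
        })

trace = GpsTrace("jen")
trace.add_fix(53.79, -1.54)
trace.add_fix(53.80, -1.55)
payload = trace.to_payload()   # would be sent to the drawing server
```

A gallery client would fetch such payloads and render each user's line, with colour, thickness and opacity driven by the live data feeds the project describes.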

Posted by jo at 11:59 AM | Comments (0)

January 31, 2005



Capturing and Archiving Personal Artefacts

RAW--a project of the Human Connectedness group of the now defunct MediaLab Europe-- is a set of tools and processes for capturing, in an unconventional way, everyday subjective experience of a place, a culture, a people. RAW is named for the raw data gathered that purposefully remain unedited throughout the processes. The combined characteristics of RAW (including an emphasis on context of use, non-edition, and data presentation) make this concept a novel approach to authorship, to cultural exchanges, to audiovisual language, and to documentation.

At its core, RAW is an audiovisual recording device that combines a digital camera and audio recorder. Taking a picture triggers the recording of the sound a minute before and a minute after it. The relationship created between sound and image forms a disjoint flow and opens a new field of audiovisual expression. These previously uncaptured moments in time can be kept as personal artefacts and archived for human studies. The collected, unedited data will also be explored in a daring way within public interactive installations.
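Recording sound "a minute before" the shutter fires implies that audio must be buffered continuously, so the previous sixty seconds are always in memory when the photo is taken. A minimal sketch of that one mechanism (sample rate and buffer handling are assumptions, not the RAW device's implementation):

```python
from collections import deque

# A bounded ring buffer: audio is written continuously, old samples
# fall off the front, so the last PRE_SECONDS are always retained.
RATE = 8000                       # assumed sample rate, samples/second
PRE_SECONDS = 60

ring = deque(maxlen=RATE * PRE_SECONDS)

def on_audio_chunk(samples):
    """Called for every incoming audio chunk."""
    ring.extend(samples)

def on_shutter():
    """Snapshot the minute before the photo; the minute after would
    then be recorded normally and joined on."""
    return list(ring)

# Feed two minutes of dummy audio; only the last minute is retained.
for second in range(120):
    on_audio_chunk([second] * RATE)
clip = on_shutter()
print(len(clip) == RATE * PRE_SECONDS)   # True
```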

While RAW can address different intentions, its primary application is to be at the disposal of individuals in a series of social and cultural contexts. Specifically, its use in the African country of Mali has driven the research and development of the project.

Posted by jo at 12:20 PM | Comments (0)

January 11, 2005

Code Zebra


New Artistic Practices Meets Performance Art, Art and Science

Code Zebra is a highly interactive, interdisciplinary performance and software system where art meets science. The performances occur at sites around the world and on the web. CZ induces dialogues and debates between science (with an interest in computer and biological science) and the arts (including visual art, design, fashion, architecture). Code Zebra consists of fictional flirtation sessions between an artist and a scientist, and actual conversations and debates between the arts and sciences. Code Zebra is built as a scalable performance series that allows venues to plug and play.

Elements of the development of Code Zebra will be captured in video and Internet streams and form part of a significant archive that will be used in performing the work. Users on the site will experience live events via streamed audio, and at times video. Live events will be publicized through the web site. The project draws from performance theory (art, literature, sociology), discourse theory and grammatology, cross-disciplinary research methods, visualization theory and computer science practice, simulation, new media analysis.

The metaphor of "zebra" is at the core of the project because zebra's stripes are reaction/diffusion patterns; a provocative but resolution based way of describing dialogue. Evolutionary theory uses zebras as a case study, and reaction/diffusion has been at the core of this emerging performance series and at the heart of the operations of computer code. Zebras are the unexpected evolutionary protocol. They stand out in a crowd, but induce nausea in lions, their former predators. Above all zebras survive by moving in camouflaged herds.

Performances will take place in various real and simulated locations. The software of Code Zebra also combines panel discussions and debates, live and on-line, chat, simulated conversations and patterns and visualizations of chat or performance art metaphors using software and fictional performances.

CodeZebra OS (Orifice System) Software

In November and December of 2000, a group of leading artist/software developers, streamed media artists, discourse theorists, chat analysts, computer scientists and programmers gathered in San Francisco, led by Sara Diamond and hosted by the ArtsAlliance. Together, they modeled Code Zebra, software that analyzes and allows people to author on-line chat and video streams, producing visual patterns that allow users to better understand and symbolize their own position within discussions. The software deploys reaction/diffusion patterns from nature, but permits individuals or groups to have a personalized pattern at any point in time. The design retreat included some of the world's leading thinkers in Internet dialogue analysis, streamed media, and pattern creation. It links to the larger Code Zebra project. The software is capable of analyzing all forms of chat, but will be focused on debates and discussions in art and science as a development phase.

The software provides users with the ability to use visualization patterns to locate themselves within an Internet discussion, review their histories of dialogue, enter a deeply moderated or anarchic space that is designated by topic and/or by mode of chat and feel physically located in this space; monitor other chats simultaneously, use physical links to relate ideas; build a personal pattern icon and a personal tale of chats and Internet dialogues that unfolds as a visually beautiful, navigable, shareable 3D and sound movie.

Fear and self-preservation need to be confronted and transcended in the process of enacting cross-disciplinary exchange. This will be so in the performances and is already a part of the software design process. The performances and the software translate constantly between the semantic meanings of concepts and people's relationships, turning them, through visualization, into new levels of understanding. The software enables agency (you choose topics and people), but its intelligence constantly suggests new ideas, related concepts and people for you to connect to. While appearing light, beautiful and playful, this project is in fact serious and viral.

The software development workshop may be a good example of the lock-up technique (referred to later), as the artist placed fifteen top-end researchers and coders in a room with her and several other artists. They came up with the chat visualization software that was a complex melding of computational linguistics, social instincts, and simulations of animal and human physical characteristics and evolutionary logic. First and foremost, they combined an emotional system with the anarchic or unconscious dynamics of Internet interaction. Code Zebra is all about process, allowing a series of lenses on the process of dialogue and creativity. While driven by the vision of one artist, it is highly collaborative and hence high risk, requiring the cooperation of different talents and disciplines for its success.

The user enters each chat or dialogue session by dwelling on the surface of the site and then diving into a familiar or seductive pattern. Once in the pattern one moves through its moving mass and can stay at any point for dialogue. The software monitors styles of discussion through pattern analysis and can impose patterns on certain kinds of dialogues should these go astray, or at least suggest them. Patterns can be used to analyse who is talking to whom, about what, but most profoundly how they are talking. Character scripts based on the reaction diffusion character team (Code Zebra, Os Zealot, etc.) will arrive in at least text, if not visual form, to moderate discussions or suggest changes of mode (e.g. a ten-word posting limit that looks like playful leopard spots). This process of swimming through the topography of the site is called orifice systems (OS).

Each user creates their icon on the site, a moniker for HOW they interact. Each chat session produces a pattern that they can capture at any point in time. It forms a ring, or layer, attached to their icon. Over time, these layers build to create a personal tale. These are three-dimensional forms that the user can enter, fly through, fly around, and pull out layers of for reconsideration. Although highly visual (and eventually sonic), drilling down into these results in precise database information about who one has spoken to, about what and in what style (aggressive debate, playful banter, formal panel discussion) at what time. The forms operate as 3D movies, luscious and beckoning. The icons sit on the surface of the site, but at any time, users can share these or revisit these alone or accompanied.

Posted by jo at 06:55 PM | Comments (0)

January 05, 2005



Website Traffic Statistics Visualization Software

VisitorVille is software that takes a new visual approach to web analytics; instead of representing website visitors simply as numbers or graphs, it displays them as real people in a real environment. You can watch your site traffic as if you were people-watching in a big city. VisitorVille brings your website visitors to life as animated characters in real time. [via neural.it]

Posted by jo at 10:07 AM | Comments (0)

January 03, 2005

Real time 3D


3D tools for performance artists

Touch Tools by Derivative, Inc. is a software toolkit that allows performance artists, VJs, architects, musicians, stage designers, and others to create 3D visuals in real time.

Using Academy Award-winning technology as its basis, Touch is infinitely customizable. Creating your work in Touch is accomplished with visual, node-based editing, and Touch Designer has capabilities to create procedural 3D models, particles, compositing, textures, and live video input.

Touch tools have already been used in high-profile projects, such as the visuals and other controls for Plastikman shows and as part of the new Prada building in Tokyo.

I used an earlier version of Touch Designer to create Immaterial, a performance of live 3D elements with pre-recorded video. (Posted by Michelle Kasprzak)

Posted by at 01:11 PM | Comments (0)



Multiple Streaming Engine

MuSE provides the free software community with a user friendly but powerful tool for network audio streaming, making life easier for indypendent free speech online radios.

MuSE is an application for the mixing, encoding, and network streaming of sound: it can mix up to 6 encoded audio bitstreams (from files or network, mp3 or ogg) plus a soundcard input signal; the resulting stream can be played locally on the sound card and/or encoded at different bitrates, recorded to hard disk and/or streamed to the net. When sent to a server, the resulting audio can be listened to over the net with any of a vast number of players available on different operating systems.

MuSE offers both graphical interfaces and a documented command-line interface in the good old Unix style. MuSE is free software, released open source under the terms of the GNU General Public License.
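At its simplest, the mixing stage MuSE performs amounts to a per-sample weighted sum of its decoded input channels. A toy illustration of that one concept (a sketch only, not MuSE's code, which also handles decoding, resampling and streaming):

```python
# Several decoded audio channels summed, sample by sample, into one
# output buffer, with an optional per-channel gain.
def mix_channels(channels, gains=None):
    """Sum equal-length sample buffers; gains default to unity."""
    if gains is None:
        gains = [1.0] * len(channels)
    length = min(len(c) for c in channels)
    return [sum(g * c[i] for g, c in zip(gains, channels))
            for i in range(length)]

out = mix_channels([[0.1, 0.2], [0.3, 0.4]], gains=[1.0, 0.5])
print(out)   # [0.25, 0.4]
```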

Posted by jo at 08:07 AM | Comments (0)

December 30, 2004

Vidget 3.5


Experimental Performance Device

Vidget 3.5 is an experimental interactive audiovisual performance device which allows the user to manipulate video in real time online. As well as mixing a number of video clips together, the user may search for still images from the Flickr photo sharing site and mix them together. [via unmediated]

"...this version (Vidget 3.5) follows on from previous versions: Vidget 1; Flickr Image Viewer (Vidget 2); and Vidget 3...For instructions on Vidget 3.5 usage, see previous versions above.


The main feature which has been fixed is the Flickr image search function. Over the last few weeks flickr.com (a very cool photo sharing site) has been changing the way they format their RSS feeds, which means previous vidgets are broken. Things seem to have settled down now; after a brief period of including 'enclosure' tags in their feeds, I am back to parsing information from the 'description' tags for each image.

I am also back to using separate movies for the controls and output of the vidget. This improves the performance greatly as the output movie (which composites up to 3 source movies over the top of each other in real time) is rendered at 320*240 and may be increased to 640*480 by viewing at 'double size' in the Quicktime Player. I was having a lot of problems with sending the url of each of the 'source' movies from the controller movie to the output movie to be loaded. The solution was to have the output movie effectively 'ask' for the url via a custom event.

The controller movie now finds the url of the selected source movie (from either the xml file which lists the names of my clips or the Flickr RSS search response), sets it as a 'string variable' and then executes a custom event in the output movie which reads the 'string variable' back from the controller movie. The same process applies to tell the output movie which Movie In A Movie track to load the clip into.

Here's a bit of the LiveStage Pro Qscript for the controller movie:

SetStringVariable(1, channel)

SetStringVariable(2, bignumber)
SetStringVariable(4, movpath)

MovieOfID(12345).TrackNamed("Untitled Sprite").SpriteOfID(1).ExecuteEvent(42684)

The output movie then executes an event which gets these variables and sets them as 'Movie Variables':

SetString(channel, MovieOfID(838482).GetVariable(1))
SetString(bignumber, MovieOfID(838482).GetVariable(2))
SetString(movpath, MovieOfID(838482).GetVariable(4))

TrackNamed(channel).AddChildMovie(bignumber, movpath)

Livestage Pro source files: Controller, Output"

Blogged by David Wolf
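The controller/output handoff in the QScript above is essentially a tiny message-passing protocol: one movie sets named string variables, then fires a custom event that prompts the other movie to read them back and load the clip. The same shape, expressed as a Python analogy (the class and all names are invented for illustration):

```python
# Minimal stand-in for a QuickTime movie that holds numbered string
# variables and responds to custom events.
class Movie:
    def __init__(self):
        self.variables = {}
        self.handlers = {}

    def set_variable(self, slot, value):
        self.variables[slot] = value

    def on(self, event_id, handler):
        self.handlers[event_id] = handler

    def execute_event(self, event_id):
        self.handlers[event_id]()

controller, output = Movie(), Movie()
loaded = []

def load_clip():
    # The output movie "asks" the controller for the target track and
    # the url, mirroring GetVariable in the QScript.
    channel = controller.variables[1]
    movpath = controller.variables[4]
    loaded.append((channel, movpath))

output.on(42684, load_clip)

# Controller side: set the variables, then fire the custom event.
controller.set_variable(1, "track2")
controller.set_variable(4, "movies/clip07.mov")
output.execute_event(42684)
print(loaded)   # [('track2', 'movies/clip07.mov')]
```

The indirection matters because, as the post explains, pushing the URL directly from the controller to the output movie was unreliable; having the output pull it via an event sidesteps that.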

Posted by jo at 08:40 AM | Comments (0)

December 27, 2004



Emerging Infrastructures of All (Inter)net Research

Dr. Reinhold Grether's network research | netzwissenschaft site maps the "emerging infrastructures of all (inter)net research endeavours. net.science as an anthropology of connectivity is trying to overcome the constraints of specialist method transfers on net matters. the protuberance of technical networks necessitates a professionalization of human net knowledge. neither the isolation of concepts as in basic research nor the encapsulation of processes as in applied sciences will ever be able to adequately describe the complex autopoiesis of networks. net.science is undoubtedly developing into a scienza nuova of its own right."

Check out his Mobile Art and Virtual Performance research areas.

Posted by jo at 04:45 PM | Comments (0)

November 15, 2004



Capacity for Charged Performances

Capacitor* is a group of interdisciplinary movement artists that accumulate energy from our technologically impacted culture and release it through innovative, multi-sensory performances. One of their pieces, Avatars (2002), takes the characters and themes of computer games and incorporates them into their performances. (Video clips are available of this performance).

From the movement of the human diaphragm to the story of evolution; from the behavior of electricity to genetic manipulation; from the birth of the moon to the cycles of digestion - science and technology form the basis for Capacitor's study of performance. Obsessed with the mechanics of the human body as well as machines that propel the body through space, Capacitor artists have become masters of rigging systems and large-scale props designed to stretch the limits of physical poetry. [via]

*ca·pac·i·tor (kə pas′i tər) n. a device for accumulating and holding a charge of electricity.

To cultivate new and relevant performance concepts, Capacitor developed the Capacitor Lab, a think-tank of artists, engineers, scientists and philosophers who exchange ideas, share knowledge, and ignite each other's imaginations towards the creation of innovative performance pieces.

The lively dialogues and debates that take place in the Capacitor Lab bring a broader outlook to the creative process and provide a vehicle for critical feedback as new work is developed. By engendering dialogue around contemporary technological and scientific issues, Capacitor aims to find common ground amongst individuals from varied fields of inquiry and, together, shape a new performance aesthetic built on today's innovations.

Capacitor has created works that tackle the creation of the universe (Within Outer Spaces, 2001), the past and future of mankind (futurespecies, 2000), the modern phenomenon of video gaming (Avatars, 2002) and, most recently, a journey through the visible and invisible layers of the Earth (Digging in the Dark, 2004). Capacitor's uncanny and contemplative use of technology won the company an invitation to speak at the Monaco Dance Forum in 2002 and has been covered by Computer Graphics World, Wired.com, SHIFT Magazine, NBC 11's Tech NOW!, CNET Radio, TECH TV, Dance Magazine and The New York Times.

Founded in 1997, Capacitor is grateful for support from San Francisco's Grants for the Arts, California Arts Council, the Zellerbach Family Foundation, Altria Group, Inc., the LEF Foundation, Theatre Bay Area, and the American Composers Forum.

Posted by jo at 10:35 AM | Comments (0)

November 09, 2004

Messa di voce


Can't stop the fun with those walls

At this year's Ars Electronica, I tested the amazing Messa di voce by Golan Levin, Zach Lieberman, Jaap Blonk and Joan La Barbara.

A computer equipped with a video camera tracks the position of visitors’ heads and analyzes the sounds they make. In response, various graphic compositions are projected onto a screen. With the help of the tracking system, these visualizations can be shown in a way that makes them seem to emanate from the mouths of the people themselves. (via, inspired by angermann)
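The piece's actual software is custom-built by Levin and Lieberman; as a toy sketch of the idea, the mapping from a tracked head in camera space to a projection anchor near the mouth could look like this (the function name and the mouth-position heuristic are my own, not the artists'):

```python
def mouth_anchor(head_box, cam_size, screen_size):
    """Estimate a projection anchor near the mouth from a head bounding box.

    head_box: (x, y, w, h) in camera pixels; cam_size and screen_size
    are (width, height). Returns the anchor in projector coordinates,
    so graphics drawn there appear to emanate from the speaker's mouth.
    """
    x, y, w, h = head_box
    # Assume the mouth sits horizontally centred, about 80% down the box.
    mx, my = x + w / 2, y + 0.8 * h
    # Scale from camera resolution to projector resolution.
    sx = mx / cam_size[0] * screen_size[0]
    sy = my / cam_size[1] * screen_size[1]
    return sx, sy

# A 640x480 camera feeding a 1280x960 projection:
print(mouth_anchor((100, 100, 100, 125), (640, 480), (1280, 960)))
```

In the real installation the head tracking comes from computer vision on the live video, and the graphics respond to the sound analysis as well; this only shows the geometric step.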

More recently, I heard about two other creations that allow people to play with walls:

Greg Roberts' PlayMotion! combines computer vision algorithms and videogame graphics techniques to turn ordinary walls, floors, and ceilings into virtual playgrounds.

Thanks to Bill Read for the tip.


Bloomberg ICE space in Tokyo observes the visitors and continually computes its appearance in accordance with their behaviour. In stand-by mode, the icicle displays columns of numbers. Depending on the current share prices, they expand on the LED display or they shrink and disappear altogether.

But when a visitor approaches the glass wall, infrared sensors detect her/his presence and ask her/him to participate. The columns of numbers begin to flutter and fluctuate; a menu appears and offers various games - electronic volleyball, a wave generator or a digital harp, for example. The icicle can be used to play music synesthetically in that the sounds played are replicated on it in the form of brightly lit, colorful tree branches.
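The ICE's software is proprietary; purely as illustration, the described behaviour of columns growing with rising share prices and vanishing on a large enough drop could be as simple as this (names and scaling are invented):

```python
def column_height(price_change_pct, max_height=100, sensitivity=10.0):
    """Map a share's percentage price change to an LED column height.

    Positive changes grow the column, negative ones shrink it; a large
    enough drop makes the column disappear entirely (height 0).
    """
    h = max_height / 2 + sensitivity * price_change_pct
    return max(0, min(max_height, round(h)))

print(column_height(0))    # flat price: half height
print(column_height(10))   # strong gain: clamped to full height
print(column_height(-6))   # strong drop: column disappears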

Posted by jo at 01:08 PM | Comments (0)

October 27, 2004



Augmented Reality

ArtFutura's theme this year is Augmented Reality. It's taking place now in Barcelona [October 28th-31st]. The programme includes Howard "Smartmobs" Rheingold; Blast Theory, who will perform Can You See Me Now?; the SimpleTEXT performance; Dublin’s MediaLab Europe and Montreal’s SAT, showcasing installations and developing experimental projects; Richard Marks, creator of EYETOY; Greyworld; Fiona Raby; etc. (via)


Saturday October 30, 2004 - 4PM - 8PM, Live from Barcelona, SAT presents Phéromones, Art Futura's closing show. Phéromones is a networked, multichannel audiovisual techno performance bringing together musicians Alain Thibault, Physical Noise Theatre & Champion and VJs jocool, Ladyroll & Le couple in Barcelona, as well as Yan Breuleux, Nuclear Ramjet & Johnny Ranger in Montreal.

Pheromones will connect the Mercat de les Flors in Barcelona and the head office of the SAT in Montreal over an Internet2 optical fiber network (providing data flows of up to 2 gigabits per second), in order to share music and images generated on both sides of the Atlantic by means of an experimental technical setup: eight audio channels and immersive multi-screens. R&D for clubbing culture.
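Reading the quoted link capacity as 2 gigabits per second (the plausible Internet2 figure), a back-of-envelope calculation shows how little of it the eight audio channels would occupy even uncompressed; the sample rate and bit depth below are assumptions, not documented values:

```python
# Back-of-envelope: 8 channels of uncompressed audio on a 2 Gb/s link.
SAMPLE_RATE = 48_000      # Hz, a common professional rate (assumed)
BIT_DEPTH = 24            # bits per sample (assumed)
CHANNELS = 8
LINK_BPS = 2_000_000_000  # 2 gigabits per second

audio_bps = SAMPLE_RATE * BIT_DEPTH * CHANNELS
fraction = audio_bps / LINK_BPS
print(f"audio: {audio_bps / 1e6:.3f} Mb/s ({fraction:.2%} of the link)")
```

The point is that on a link this fast the audio is nearly free; the capacity is there for the immersive multi-screen video.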

From October 26 to 30, 2004
A team of 17 artists, researchers and technicians, members of the Territoires ouverts (TOT - Open Territories) consortium initiated by the SAT, has been invited by the Art Futura festival to present its work in Barcelona.

Several of the tools developed at SAT, like the nSLAM multichannel audio application and the pixelTANGO digital video mixer, will be put to use to demonstrate the developers' know-how and expertise in the fields of IP streaming video and multichannel audio, IP telepresence (point-to-point and multi-point), and the transmission of immersive audio-visual environments over IP networks.

The Open Territories project aims to foster the emergence of innovative forms of cultural expression by engaging the creativity of a new generation of artists and creators, in whose hands will be placed a host of advanced interface and networking technologies. The TOT project opens the broadband network environment to talented creators who are ideally placed to illustrate broadband's potential as a means of avant-garde cultural expression.

Thursday October 28, 2004 - 10AM - 5PM
Friday October 29, 2004 - 10AM - 5PM
Saturday October 30, 2004 - 10AM - 4PM
SAT invites you and your friends to "cross the Atlantic" and project yourself at the beautiful Mercat de les Flors in Barcelona, site of the Art Futura Festival.

During the three days of the festival, using two telepresence stations, people on both sides of the Atlantic will be able to meet and speak as if they were sitting at the same table, with high audio and video resolution.

To participate and meet the Catalan artistic community, Art Futura's visitors, and the SAT team in Barcelona, come to the SAT Café each day between 10AM and 5PM (4PM on Saturday).

Free entrance

Posted by jo at 02:00 PM | Comments (0)

October 20, 2004

A map larger than the territory


Mapping Flow, Crossed Paths

A Map Larger Than the Territory is a Web application that enables participants to represent their paths across the city using images, texts and sounds. Territory here is not a piece of land enclosed within borders but an interlocking network of lines or ways through. The map materialises and connects individual trajectories.

How does it work? Choose a city and a language. The map shows other people's paths in that city. A button at the right sends you to a blind map where you can add an itinerary of your own. To do so, you must first give it a name, a date and a color. Use the tools provided to locate places on the map and define points on your path. Each time you mark a location, a dialog box opens up for you to identify and describe it. When you have finished marking up your path, you can view the itinerary you have made.
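The application's actual data format is not published; a minimal sketch of the workflow above, an itinerary with a name, date and colour plus annotated points, serialized for the web, might look like this (all field names are my own):

```python
import json

def make_itinerary(name, date, color, points):
    """Bundle a named, dated, coloured path with its annotated points.

    points: list of (x, y, label, note) tuples: map coordinates plus the
    identification and description entered in the dialog box at each stop.
    """
    return {
        "name": name, "date": date, "color": color,
        "points": [
            {"x": x, "y": y, "label": label, "note": note}
            for x, y, label, note in points
        ],
    }

path = make_itinerary("morning walk", "2004-10-20", "#cc0000",
                      [(12, 48, "cafe", "met a friend"),
                       (14, 50, "bridge", "crossed the river")])
print(json.dumps(path, indent=2))
```

The map the site draws is then just the overlay of many such structures, one per participant.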

Posted by jo at 10:42 AM

October 11, 2004



A Networked Instrument

Auracle, a voice-controlled, networked instrument on the Internet, was created over the past year at Akademie Schloss Solitude in Germany by the Auracle team (Max Neuhaus, Phil Burk, Sekhar Ramakrishnan, Kristjan Varnik, David Birchfield, and Jason Freeman--read PDF). The project is inspired by Max Neuhaus' interactive radio works from the 1960s and 1970s.

The site is now live, and you can try out Auracle at any time (you'll need a computer with a microphone). The official launch event is Friday, October 15th at Donaueschinger Musiktage, a new music festival in Southern Germany. You are welcome to join in via the Internet by using Auracle at 2:30 p.m. Eastern Standard Time this Friday; see the web site for details.

Auracle is a networked sound instrument, controlled by the voice. It is played and heard over the Internet. To participate, simply launch Auracle in your web browser, join an ensemble, and create sounds with other participants in real time. While it takes years to learn how to control an instrument such as guitar or piano, Auracle requires no special training to play. It takes advantage of the sophisticated and flexible vocal control we all have from our ability to speak.

Your voice, however, is not the source of Auracle's sounds — it is merely a way of controlling those sounds. You play Auracle with other people anywhere in the world in groups of up to five players called Ensembles. All members of an ensemble are able to hear each other's gestures. Listen to the active ensembles in Auracle by clicking on their names. Stay with a group that you like, or create a new ensemble yourself and invite others to come and play.

Auracle is an instrument, not a piece; it is a system, not a musical composition. It has an internal structure, but it does not define or control the interaction of its participants. The architecture itself adapts over time in response to how it is used.
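Auracle's actual vocal analysis is more sophisticated (see the team's PDF), but the core idea of using the voice as a controller rather than a sound source can be sketched with two coarse features per audio frame, loudness and zero-crossing rate, which could then drive synthesis parameters:

```python
def vocal_features(samples):
    """Extract two coarse control signals from a frame of voice samples:
    mean absolute amplitude (loudness) and zero-crossing rate (a rough
    brightness/pitch cue). The samples are floats in [-1, 1].
    """
    n = len(samples)
    amplitude = sum(abs(s) for s in samples) / n
    crossings = sum(
        1 for a, b in zip(samples, samples[1:]) if (a < 0) != (b < 0)
    )
    zcr = crossings / (n - 1)
    return amplitude, zcr

# A loud, rapidly alternating frame: high amplitude, maximal crossing rate.
print(vocal_features([0.5, -0.5, 0.5, -0.5]))  # (0.5, 1.0)
```

A gesture, in these terms, is the trajectory of such features over time; it is the trajectory, not the audio itself, that is shared with the rest of the ensemble.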

Posted by jo at 02:58 PM | Comments (0)

Human Ethernet


Streaming the Body

NTT demoed their human body data transmission technology at CEATEC JAPAN 2004. The technology detects weak electric fields in a human body using an optical electric-field sensor and achieves a data communication speed of about 10Mbps (much faster than the 3.7Kbps that Matsushita Electric Works' technology enables). The system also uses the TCP/IP protocol, making a human body equivalent to a 10Mbps Ethernet link. However, unlike Matsushita's technology, which is ready for commercial use, NTT's technology may need some work before it can be commercialized.

It works as follows: you step on a copper platform containing a server computer and hold a computing device; data are then sent from the server through your body to the computing device. For example, you can view streaming video transmitted through your body on your handheld device.
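To get a feel for the gap between the two quoted speeds, here is the transfer time for a hypothetical ~500 kB camera-phone photo at each rate (the photo size is illustrative):

```python
def transfer_seconds(size_bytes, bps):
    """Seconds to move size_bytes over a link of bps bits per second."""
    return size_bytes * 8 / bps

photo = 500_000  # bytes, a rough camera-phone photo of the era
print(f"NTT body link (10 Mb/s):    {transfer_seconds(photo, 10_000_000):.1f} s")
print(f"Matsushita link (3.7 kb/s): {transfer_seconds(photo, 3_700):.0f} s")
```

At 10Mbps the photo moves in under half a second, which is why streaming video through the body becomes plausible; at 3.7Kbps the same file takes roughly eighteen minutes.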

Via RFID in Japan < IT Media. Reblogged by Regine from we make money not art

Posted by jo at 11:06 AM | Comments (0)

October 08, 2004

Moon Radio


Mooning You 24-7

Moon Radio is now previewing their MyTV prototype, a programme of live, broadcast events that provide an opportunity for artists, producers and the Moon Radio community to plug-in a camera, choose a time and broadcast live to a worldwide audience 24-7.

MyTV will be launched online later in 2004. Until then, these performances will be archived and available for viewing at Moon Radio. Moon Radio webTV is a web streaming channel hosting live broadcasts, an archive of diverse films, documented live events, and an active online community of artists, filmmakers, content producers and regular viewers. It began in 2000 as an audio and video web streaming channel for artists to explore live web streaming technology. The project commissioned artists and hosted live events both online and in arts venues around the UK. Moon Radio webTV has developed to focus on building tools for the Moon Radio community. These include hosted forums, profiles of community members, and text messaging services.

Posted by jo at 09:10 AM | Comments (0)

October 06, 2004

Organised Sound: An International Journal of Music and Technology


Networked Music

Interconnection has always been a fundamental principle of music, prompting experimental artists to explore the implications of linking their computers together long before the Internet reached the public consciousness. As the Internet achieved critical mass over the past decade, networking technology took centre stage as the key to a vast new territory of possibility, facilitating remote participation, distributed processing, and a redefinition of musical space and time. The Web emerged as a virtual venue for countless musical purposes, and as analog acoustics transformed to digital representations, packets of data carried by IP from one address to another became a modern metaphor for air molecules transmitting the tone of a vibrating body to the eardrum.

As with any new technology, applications of networking to music have evolved from naïve proofs-of-concept to more sophisticated projects, and we stand now at a point when 'internetworking' is taken for granted, novelty is expiring and artistic goals more often transcend technical considerations. From this vantage, the essential question is not how networking and music are combined, but why. What is the unique experience that can be created? Whose role can be empowered or transformed: composer, performer, audience? Where can sound come alive that it couldn't otherwise? Networked music can reinterpret traditional perspectives on stagecraft, ensemble, improvisation, instrumentation, and collaboration, or enable otherwise impractical relationships between controllers, sensors, processors, inputs, and outputs. The network can be an interface, a medium, an amplifier, a microphone, a mirror, a conduit, a cloud, or a heartbeat.

The network is all of us. Music is the sound we make. Listen...

Call for Articles and Works

Volume 10, Number 3
Issue thematic title: Networked Music
Date of Publication: December 2005
Publishers: Cambridge University Press

We invite submissions from composers, performers, artists and researchers working in the realm of digital media and sound. Submissions related to the theme are encouraged; however, those that fall outside the scope of this theme are always welcome.

Issue Co-ordinators: Margaret Schedel [gem at schedel.net] and John P. Young [sound at netmuse.org]. This issue is being prepared in collaboration with the International Computer Music Association (ICMA).

The theme represents many avenues for discussion including, but not limited to:

Networked control interfaces (hardware/software)
Sensor arrays/interaction
Distributed/remote participation (composition, performance, reception)
Broadcasting/multicasting/streaming media
Virtual musical environments/venues
Aesthetics/philosophy of musical interconnection
Web-based music projects
Distance learning/education
Online collaboration
Networked data sonification
Real-time remote sensing
Distributed processing
Networking for fault tolerance
Musical avatars/agents/bots
Emergent network phenomena/effects/behavior
Neural networks
Alternative musical networks (RF, MIDI, WiFi, Bluetooth, etc.)
Strategies for mitigating network limitations (bandwidth, latency, etc.)

This issue continues the annual partnership between Organised Sound and the International Computer Music Association, with previous themes including "Performing with Technology" and "Collaboration and Intermedia." In exploring these prior areas, networking has emerged as a common element underlying a wide variety of innovative projects, prompting a more focused look at the mutual influence between networks and music. This should be no surprise in the electroacoustic field, where our machines are partners as much as tools, and working with other artists, or often even solo, requires connection between multiple machines. In the pre-network era, technical obstacles frequently dictated that much computer music occurred in relative isolation, with musicians expending precious attention acting as interpreters between hardware and other humans. So in one sense, networked music can be simply a recapitulation of acoustic music principles, of listening and sensitivity to other performers and audience, by enabling computers to participate equally in the musical conversation. Networking can also radically alter these traditional principles, most obviously by decoupling the spatial framework, enabling some or all of the participants to act and perceive without being physically present. Thus networked music is fertile territory for the composers, performers, and researchers who comprise the ICMA, both as a potential means of overcoming challenging limitations of technology and as a source of new possibilities we have yet to imagine.
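One of the network limitations the call mentions, latency, has a classic mitigation that can be sketched in a few lines: rather than letting each player hear the others at unequal delays, schedule everyone's events for a shared playout time just beyond the slowest measured link (the numbers here are illustrative, not from any particular system):

```python
def playout_delay(latencies_ms, safety_ms=20):
    """Common playout delay for a networked ensemble.

    Delaying all streams to just past the worst one-way latency, plus a
    small jitter margin, trades immediacy for synchrony: everyone hears
    the same ensemble at the same musical time.
    """
    return max(latencies_ms) + safety_ms

# Three hypothetical players with 30, 80 and 55 ms one-way latency:
print(playout_delay([30, 80, 55]), "ms common playout delay")
```

The musical question the issue raises is precisely whether such delays should be hidden, as here, or embraced as material.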

Submissions may consist of papers, with optional supporting short compositions or excerpts, audio-visual documentation of performances and/or other aspects related to your submission. Supporting audio and audio-visual material will be presented as part of the journal's annual DVD-ROM, which will appear with issue 10/3. Related discussion will be located on the ICMA Array website, and additional multimedia at Organised Sound's Cambridge University Press website.



Notes for Contributors and further details can be obtained from the inside back cover of published issues of Organised Sound or from:


Email submissions should be sent to (please see SUBMISSION FORMAT above): os at dmu.ac.uk

Hard copy of articles (only when requested) and other material (e.g., images, sound and audio-visual files, etc.) should be submitted to:

Prof. Leigh Landy
Organised Sound
Clephan Building
De Montfort University
Leicester LE1 9BH, UK.

Editor: Leigh Landy
Associate Editors: Ross Kirk and Richard Orton
Regional Editors: Joel Chadabe, Kenneth Fields, Eduardo Miranda, Jøran Rudi, Barry Truax, Ian Whalley, David Worrall
ICMA Representative: Mary Simoni
International Editorial Board: Marc Battier, Laurent Bayle, Hannah Bosma, Alessandro Cipriani, Simon Emmerson, Rajmil Fischman, David Howard, Rosemary Mountain, Tony Myatt, Jean-Claude Risset, Francis Rumsey

Posted by jo at 09:27 AM | Comments (0)

September 30, 2004



BiReality: Mutually Immersive Mobile Telepresence

BiReality uses a teleoperated robotic surrogate to visit remote locations as a substitute for physical travel. The goal is to create, both for the user and the people at the remote location, the sensory experience relevant for face-to-face interactions. The second-generation system provides a 360-degree surround immersive audio and visual experience for both the user and remote participants, and streams eight high-quality video streams totaling almost 20Mb/s over wireless networking. The system preserves gaze and eye contact, presents local and remote participants to each other at life size, and preserves the head height of the user at the remote location.
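A quick sanity check on the stated figures: eight video streams sharing almost 20Mb/s leaves each stream a healthy budget by 2004 standards.

```python
# Per-stream bandwidth implied by BiReality's published numbers.
STREAMS = 8
TOTAL_MBPS = 20  # "almost 20Mb/s" from the description
per_stream_mbps = TOTAL_MBPS / STREAMS
print(f"~{per_stream_mbps} Mb/s per video stream")
```

Roughly 2.5Mb/s per stream is in the range of good MPEG-compressed video of the period, which is consistent with the claim of high-quality, life-size presentation.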

Posted by jo at 09:31 AM | Comments (0)

September 29, 2004

The Invisible Train


The Invisible Train is a mobile, collaborative multi-user Augmented Reality (AR) game, in which players control virtual trains on a real wooden miniature railroad track. These virtual trains are only visible to players through their PDA's video see-through display as they don't exist in the physical world. This type of user interface is commonly called the "magic lens metaphor".

Players can interact with the game environment by operating track switches and adjusting the speed of their virtual trains. The current state of the game is synchronized between all participants via wireless networking. The common goal of the game is to prevent the virtual trains from colliding. Watch the Quicktime video. Invisible Train will be demonstrated at the International Symposium on Mixed and Augmented Reality (ISMAR 2004) in Arlington, VA.
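The game's synchronization protocol is not published; a minimal last-write-wins merge, a common approach for this kind of small shared state (track switches, train speeds), might look like this (all names are my own):

```python
def merge(local, remote):
    """Last-write-wins merge of two game-state dicts.

    Each value is a (timestamp, state) pair; for every key, the entry
    with the newer timestamp wins, so all PDAs converge on one view of
    the switches and trains after exchanging states.
    """
    merged = dict(local)
    for key, (ts, val) in remote.items():
        if key not in merged or ts > merged[key][0]:
            merged[key] = (ts, val)
    return merged

local = {"switch_3": (10, "left"), "train_a_speed": (12, 0.8)}
remote = {"switch_3": (11, "right")}
print(merge(local, remote))
```

A real implementation over flaky wireless would also need retransmission and conflict rules for simultaneous edits, but the convergence idea is the same.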

Originally from waxy. Reblogged by tripp on the USC Interactive Media Division blog, and by emily on near near future.

Posted by jo at 12:59 PM | Comments (0)

September 14, 2004



UpStaging Performance

A new medium for online performance, theatre and storytelling is now in its first release. UpStage is a web-based venue and tool for artists to compile different digital media for textual and audiovisual communication into a live performance, in real time, for online audiences.

The first release of the software was launched on 9 January 2004, and online walk-throughs were held in February to give people an idea of how UpStage works from the player's perspective. These sessions will continue on a regular basis, led by members of Avatar Body Collision. If you are interested in having a hands-on experience with the software, and participating in live improv sessions, email colliders at upstage.org.nz for further information and to be notified of times. Visit the UpStage foyer, from where you can access a sample stage.

Currently, UpStage is being used for WorldX, a virtual exchange between schools in the UK and New Zealand, and DTN2, the first cyberformance using UpStage, was performed live from the Machinista Festival in Glasgow on Sunday 9 May.

Posted by jo at 06:09 PM | Comments (0)

September 09, 2004

tanz performance köln


European Complements

tanz performance köln, founded in 1989, develops concepts and realizes dance-related series, festivals and dance film programs that are representative of new developments in contemporary dance, both nationally and internationally, including projects with a strong focus on new media technologies. Their projects include DAMPF_lab, an interdisciplinary collaborative project fostering new and distinctive art works of high quality that integrate interactive media technologies with the performing arts, while seeking insight into questions such as: how do emerging software practices impact performance? What is the experience of the performer on an interactive stage? How are the relations between maker, performer and audience still shifting? And what are the aesthetic/social consequences?

tanz performance köln partners include:

Animax Multimedia Theater (est. 1999) of the Bonn Development Workshop for Computer Media (est. 1985) focuses on the production/presentation of media arts and provides an outstanding operational basis for the exploration of new interactive technologies in the areas of performance arts and immersive arts.

Ars Electronica Center/Futurelab offers one of the best-equipped and most efficient research and development workshops for Interactive Spaces, Virtual Environments and Digital Surfaces in the context of Ars Electronica, which has been an internationally unique showcase for digital art and media culture, and for encounters between the arts and sciences, since 1979.

V2_Lab is the aRt&D department of V2_, Institute for the Unstable Media (est. 1987), which concerns itself with research and development in the field of art and media technology and brings to the project a variety of knowledge/expertise derived from an interdisciplinary approach to the field of unstable media, including specific critical perspectives.

Partners work on the following complementary strands:

research labs aiming to generate shareable outcomes by drawing questions from and in turn supporting the creative work of the co-productions; stimulating other new artistic work; innovating technology solutions together; and devising unique dramaturgical and user testing approaches to the artistic process. In addition, the labs will gather documentation to contribute to theoretical and educational approaches to this area of work.

co-productions aiming to nurture and help realise two specific art works: composer/director Klaus Obermaier working with the Ars Electronica Futurelab to develop interactive media technologies for use in a new performance for the stage; and choreographer Angelika Oei working with the V2_Lab on the creation of a large scale performance/ interactive media installation work.

Associate researchers:
Matt Adams (Brighton/ London)
Daniel Aschwanden (Vienna)
Johannes Birringer (Nottingham)
Carol Brown (London), Nuria Font (Barcelona)
Nik Haffner (Frankfurt/ Barcelona)
Prue Lang (Frankfurt), Cindy Lee (London)
Martin Kusch (Vienna/ Montreal)
Christopher Salter (Berlin)
Stephanie Thiersch (Cologne)
Ariella Vidach (Milan)

Posted by jo at 01:09 PM | Comments (1)

August 14, 2004

Topological Media Lab


Responsive Media/Expressive Instruments

The Topological Media Lab provides a locus for studying gesture and materials from phenomenological, social and computational perspectives. TML research invents responsive media and expressive instruments that support novel technologies of performance and the architecture of hybrid media spaces. The products of the laboratory are (1) scholarly presentations, (2) media artifacts and performances as pieces of cultural experiment, (3) opportunities for students of design to sharpen critical faculties in project-based work.

Current application domains include: realtime video and sound synthesis, sensors, physical computing, computer-mediated human interaction, media choreography, active fabric and wearable architecture. Topological media is physical and computational matter, image or sound fashioned as substances evolving under continuous action.

The TML draws insights from studies of embodiment and materiality, performance and music, as well as dynamical systems and differential geometry and topology. Its projects also serve as case studies in the construction of fresh modes of cultural knowledge and techno-scientific practice.

Posted by jo at 06:34 PM | Comments (0)

In this interesting moment

From: Autonomous Zone: The Work-as-Art of Yury Gitman by Douglas Rushkoff, Wed Aug 11 09:00:00 GMT 2004

"...There is this interesting moment with emerging technology where it's malleable, where there is still room for play. It's a moment when our imaginations sprouts wings and our actions and choices can have lasting effects. We are in this malleable period in regards to Wi-Fi and mobile technology. Individuals, small research labs, university students and wireless community organizations can all still have an effect on the formation and development of wireless Internet networks and applications. They can have an effect on something that will solidify soon and possibly start redefining our lifestyles -- that's fairly exhilarating. That's the "high" early adopters pay for. That's the high that keeps community wireless organizations pulsing."-- Yury Gitman 

Posted by newradio at 11:58 AM | Comments (0)

Wireless Phones in Japan



Life with Ketai: the Culture of the Japanese "Almighty" Wireless Phone (Ketai)--Nearly 70% of Japanese use wireless phones, and more than 70% of these phones come with a camera. The Internet, video, TV, GPS, navigation systems, avatar chat, games, karaoke... almost any service available on the Internet, and all sorts of features related to visual entertainment, have moved onto Ketai. Ketai is not just a mobile form of telephone; it is a new form of culture. Ketai cameras have changed people’s way of dealing with the here-and-now.

Read Jason Seegert on the history and adoption of ketai in Japan -- and how Japanese use differs from American. Also check Wired Magazine for the hottest private eye in Japan and how she fights crime with her cell phone.

Posted by newradio at 11:39 AM | Comments (0)

August 03, 2004

Painting the Street


Bikes Against Bush

From "Bike Writer Pedals for Protests" by Leander Kahney, Wired News, August 2, 2004

New Yorker Joshua Kinberg is a bike messenger of a different stripe. Instead of ferrying legal papers between lawyers, he uses a homemade, wireless, bicycle-mounted dot-matrix printer to spray protest messages in the street...[Bikes Against Bush is] "painting on the street, but on the Net, too," said Kinberg, a post-graduate student at Parsons School of Design.

Continue reading at Wired.com

Posted by jo at 10:37 AM | Comments (1)