May 30, 2007
Park View Hotel
Hotels 'Bleed' into the Neighborhood
Park View Hotel, by Ashok Sukumaran, stretches between the Cesar Chavez plaza in downtown San Jose and the neighbouring Fairmont Hotel. Using specially built pointing devices, audiences in the park can access interior hotel spaces by "pinging" them optically. Once found and hit (two different modes on the scope), the interiors release their properties into a wireless network... the color of the interior propagates stochastically, leaking out of the building skin, jumping across the street, and entering some street-lights in the park below. In this way, the park enjoys a certain neighbourly access to the hotel, inverting the usual character of the relationship.
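The stochastic "leaking" described above can be pictured as a colour hopping across a small network of lights. The sketch below is purely illustrative: the network topology, hop probability, and function names are assumptions, not the artist's actual implementation.

```python
import random

def propagate(color, lights, neighbors, p_hop=0.6, seed=0):
    """Flood a colour outward from the light nearest the hotel, hop by hop.

    Each neighbour catches the colour with probability p_hop, so the
    pattern of lit lamps differs from one 'ping' to the next.
    """
    rng = random.Random(seed)
    lit = {lights[0]: color}      # the first light fires immediately
    frontier = [lights[0]]
    while frontier:
        nxt = []
        for lamp in frontier:
            for nb in neighbors.get(lamp, []):
                if nb not in lit and rng.random() < p_hop:
                    lit[nb] = color   # the colour "jumps" to a neighbour
                    nxt.append(nb)
        frontier = nxt
    return lit

# A chain of four park lights; with p_hop=1.0 the colour reaches them all.
lights = ["L0", "L1", "L2", "L3"]
neighbors = {"L0": ["L1"], "L1": ["L2"], "L2": ["L3"]}
result = propagate((255, 180, 40), lights, neighbors, p_hop=1.0)
```

Lowering `p_hop` makes the leak die out partway down the chain, which is closer to the stochastic behaviour the piece describes.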
This project was the result of a residency at Sun Microsystems Labs, where the artist was (according to the residency brief) working with SunSPOTs, small "programmable object technologies" that provide a simple-to-use prototyping platform for embedded technologies, or the so-called "Internet of Things". [via]
March 13, 2007
A little bit on digital marks; I have selected a variety of them.
The semacode is a two-dimensional code that encodes a URL. The picture (left) is the semacode of architectradure. Thank you Michael Surtees for the link! This tag embeds the URL of my blog; it can be read by your cell phone, which then sends you to the page. I guess it avoids typing in the URL, and you can rapidly go through a series of web sites using the respective tags.
It is especially useful for linking physical space to digital content. The Semacode Software Development Kit is designed for ubiquitous computing: it creates visual tags for objects and contexts that can then be read using a mobile camera phone. The physical Wikipedia called Semapedia, created by Alexis Rondeau and Stan Wiechers, allows you to add tags to places and things to link them to the relevant Wikipedia articles.
[images: trash can with a Wikipedia tag] Semacode technical paper.
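Since a semacode's payload is simply a URL, the reader side reduces to decoding the barcode and validating the address before opening it. This minimal sketch mimics only that last step; the validation rules are an illustration, not the behaviour of Semacode's actual SDK.

```python
from urllib.parse import urlparse

def resolve_tag(payload: str) -> str:
    """Treat a decoded tag payload as a web URL; reject anything else."""
    url = urlparse(payload.strip())
    if url.scheme not in ("http", "https") or not url.netloc:
        raise ValueError(f"not a web URL: {payload!r}")
    return url.geturl()

# A Semapedia-style tag simply points at the relevant Wikipedia article:
target = resolve_tag("http://en.wikipedia.org/wiki/Trash_can")
```

In a real reader, the payload string would come from a Data Matrix decoder fed by the phone's camera; everything downstream is just ordinary URL handling.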
Urban Tapestries allows public mapping and sharing by combining mobile and internet technologies with geographic information systems. This system was linked to Natalie Jeremijenko's famous feral robots (open-source robots for investigating contaminated urban sites) and called Robotic Feral Public Authoring: "Adding the sensor readings to online mapping tools, such as Urban Tapestries, suddenly brings the relationships between environment and home vividly to life. It enables people to feel they can learn about their environment and have the evidence to do something about it."
Yellow Arrow allows a community to tag places using arrows. You can post a message using an arrow, and anyone can retrieve it using their cell phone: another method of linking digital content to a physical place. The Yellow Arrow community is quite large. Their blog.
Elens allows anyone to create talking landmarks. Developed at the MIT Media Lab, it lets anyone tag a place by adding a sticker to a physical location; the sticker can later be scanned by a cell phone, in this case the Motorola A1000.
M-Views, developed at the MIT Media Lab in the Interactive Cinema group (Media Fabrics) with Glorianna Davenport, explores the "ideas, methods, and culture of mobile cinema, which is experienced in temporal and spatial narrative segments that can be delivered on context-aware mobile devices."
In 2002, I researched with Glorianna Davenport technologies that allow digital information to communicate with physical space. I worked on Passing Glances, a system that enables users to create ambient urban interludes through SMS text messages. Associated graphics and storytelling were projected in the urban space.
Enarrative5 2003 paper
With these tags, the physical space is tagged with links to the digital space. One can also think the other way around and tag the virtual space with physical content. That is what Josh Lifton told me he was working on the other day. Josh created a plug sensor/actuator network, called the dual reality lab, that links the MIT Media Laboratory space to a virtual lab space in the Second Life online virtual world.
["Dual reality" is the concept of maintaining two worlds, one virtual and one real, that reflect, influence, and merge into each other by means of deeply embedded sensor/actuator networks. Both the real and virtual components of a dual reality are complete unto themselves, but are enriched by their mutual interaction. The dual reality Media Lab is an example of such a dual reality, as enabled by the Plug sensor/actuator network that links our actual lab space to a virtual lab space in the Second Life online virtual world.]
February 06, 2007
Takashi Matsumoto on
Ubiquitous Content + Pileus: The Internet Umbrella
"[...] "Ubiquitous Content" is an idea for a new design objective for our lives in the post-PC era. In the 20th century, the notion of media content meant content like movies, music, animation, video games, etc. Figuratively speaking, such content was supplied in containers designed as "boxes". But now, the spread of networks and the realization of ubiquitous computing technologies are changing those styles of media. The container is no longer like a "box": it will change its form freely to give us advanced computer augmentations in a specific context, and it will sometimes be invisibly embedded into our environments. It is more appropriately called Ubiquitous Media, and it will be a new style of media. When we design such Ubiquitous Media, we need to think of the container as our environment, in which many things cooperate, rather than as a single piece of hardware, a single piece of software, or a single standard. Users will not need to be conscious of these media; such containers therefore emerge for users as "their lives" themselves. "Ubiquitous Contents" are contents for such media. They must be "experiences" in "their lives".
As the Ubiquitous Content project focuses on our lives and experiences, all things in our everyday lives are targets of the design. The 10 laboratories of KMD are working on this broad subject from different perspectives....
Pileus is the most exciting project for me right now. This work was designed in a team with Sho Hashimoto, who has unique engineering skills in the lab. We started this project at a kick-off camp for the spring semester in 2006; the initial concept and the first scenario movie were completed in just 3 days of the camp.
We have a lot of rain in Japan, so the umbrella is one of the articles of everyday use closest to us, but in such a climate it is also a bulky one. Traditionally we have felt many kinds of air and mood on a rainy day, and we wanted to expand that feeling to be more fun and vivid through the re-design of an umbrella. From that perspective, we came up with the idea of an umbrella that takes photo-logs and browses internet content in the rain. Sho and I had already noticed that we can provide many kinds of services in the real world with Web 2.0, and we also had a technological vista for mashing those up with mobile hardware. Additionally, another target was that this could be the first example of a hardware mash-up, indicating a new economic solution for mobile gadgets joining the economy of Web 2.0. We do not want small "cellphones" (smartphones, whatever) squashing many functions inside; instead, we re-designed an object of everyday use from scratch to be mashed up with web services.
At the end of last year, we founded a spun-off LLC for the project, and we are thinking about how it will reach a business exit.
As for the ideology of the design of Pileus, we would like to show that design is no longer about shape; an apt assortment of modules and interactions is a more important factor in the design. So our prototype shows off its circuits, to reveal how the modules are combined, rather than covering them. Some people suggest we give it a beautiful surface as "design", but that is not what we want to do right now, though we are meticulous about the interaction of the information visualizations on the screen. Fortunately, this rugged look is loved by many audiences at demo sites.
As an exclusive bit of info, we have built a new version of Pileus with GPS. One new GPS function is geo-tagging of photos taken with Pileus. It will help users check and share records of their walks in the rain. Another function is a map display of an area. This will be used for big-screen navigation in an umbrella, and it will be able to show local pictures and local ads loaded onto the umbrella. Of course, this function is also realized with a mash-up technique. Right now we are using the Yahoo! Maps API, but we may switch to the Google Maps API because the Japanese map on Yahoo! Maps has poor scale ratios. We are going to run an experiment in a city in the rain; unfortunately, however, we have had few rainy days yet this year... [from Regine's interview on we-make-money-not-art]
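The geo-tagging step described in the interview amounts to pairing each photo with the most recent GPS fix at the time it was taken. The sketch below illustrates that pairing; the record fields and function name are assumptions for illustration, not Pileus's actual code.

```python
def geotag(photos, fixes):
    """Attach to each photo the latest GPS fix at or before its timestamp.

    photos: list of {"t": seconds, "name": str}
    fixes:  list of {"t": seconds, "lat": float, "lon": float}
    """
    tagged = []
    for photo in sorted(photos, key=lambda p: p["t"]):
        candidates = [f for f in fixes if f["t"] <= photo["t"]]
        fix = max(candidates, key=lambda f: f["t"]) if candidates else None
        tagged.append({**photo,
                       "lat": fix["lat"] if fix else None,
                       "lon": fix["lon"] if fix else None})
    return tagged

# Two fixes a minute apart, two photos taken during the walk:
fixes = [{"t": 0, "lat": 35.66, "lon": 139.70},
         {"t": 60, "lat": 35.67, "lon": 139.71}]
photos = [{"t": 30, "name": "rainy1.jpg"}, {"t": 90, "name": "rainy2.jpg"}]
walk = geotag(photos, fixes)
```

With coordinates attached, each record can be dropped straight onto a map API such as the Yahoo! or Google Maps services the team mentions.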
January 27, 2007
Networked Furniture and More with Tobi Schneidler
"As the Internet permeates our lives, our future may include networked devices in the home and workplace that provide global connectivity. Closely examining this concept is German architect and designer Tobi Schneidler. Schneidler's work explores the seemingly limitless potential of networked furniture, living spaces, and clothing in our daily lives and experiences. From his "Remote Home" project, which provided an Internet link between the furniture and lighting fixtures in two apartments in Berlin and London, to his "Ticker Chair," which dynamically displays stock market and news information on an illuminated chair, Schneidler's work uncovers striking associations between interior design and external data streams. Gizmodo recently caught up with Schneidler to discuss his past and current projects, and to discover exactly why dynamic information displays need to exist in physical, not only screen-based, spaces." Continue reading Networked Furniture and More with Tobi Schneidler; Interview/Article by Jonah Brucker-Cohen, Gizmodo.
January 24, 2007
The FAMULUS project consists of an intelligent modified vacuum cleaner and the FAMULUS server. It is a replacement for the trash bin, that ubiquitous feature of the desktop metaphor. Instead of moving unwanted trash into the bin, it is sent over to the FAMULUS, which consumes the digital debris for the satisfaction of the user. The FAMULUS features an empirically modelled, realistic-sounding vacuum-cleaning noise that expresses the complexity of the object sucked in. Plug-ins are used in order to hand over as many data types as possible. Currently, FAMULUS can successfully imbibe e-mail messages and RSS feed items. The latter even opens up the chance to automatise the process of digital scavenging. This is the prototype website. You can already send mail directly to FAMULUS and even include URLs of RSS feeds in the mail. FAMULUS will download the RSS feed once and trash all its information as well as the original mail. The web interface is yet to come... Also see TRATTI and sevenmileboots.
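The plug-in idea above (one handler per data type, each feeding the noise model a "complexity" for the object sucked in) can be sketched as a small handler registry. The handler names and scoring rules here are illustrative assumptions, not the project's actual code.

```python
# Registry mapping a data type to the plug-in that knows how to trash it.
HANDLERS = {}

def plugin(kind):
    """Decorator that registers a handler for one kind of digital debris."""
    def register(fn):
        HANDLERS[kind] = fn
        return fn
    return register

@plugin("email")
def trash_email(item):
    # Complexity grows with body length: longer mail, longer vacuum noise.
    return {"kind": "email", "complexity": len(item.get("body", "")) // 100 + 1}

@plugin("rss")
def trash_rss(item):
    # One unit of complexity per feed entry swallowed.
    return {"kind": "rss", "complexity": len(item.get("entries", []))}

def imbibe(kind, item):
    """Dispatch debris to the matching plug-in; unknown types are refused."""
    if kind not in HANDLERS:
        raise KeyError(f"no plug-in for {kind!r}")
    return HANDLERS[kind](item)

noise = imbibe("email", {"body": "x" * 250})
```

New data types then only require registering another handler, which matches the project's stated goal of handing over as many types as possible.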
January 11, 2007
InSense: Interest-based life logging
Mapping First Life Experiences with Virtual World Counterparts
Blum, M., Pentland, A., & Tröster, G. (2006), "InSense: Interest-Based Life Logging," IEEE Multimedia, 13(4), pp. 40-48.
The paper describes a wearable data collection device called InSense, based on Vannevar Bush's Memex principles. It allows users to continually record their interactions and store them as a multimedia diary. It basically takes into account the sensor readings from a camera, a microphone, and accelerometers. The point is to "classify the user's activities" and "automatically collect multimedia clips when the user is in an 'interesting' situation".
What is interesting is the types of categories they picked to develop their context-aware framework: they chose location, speech, posture, and activities to represent many diverse aspects of a user's context. They also have subcategories (for instance, for location: office, home, outdoors, indoors, restaurant, car, street, shop).
The experience sampling approach works like this:
Subjects wear the system for several hours without interacting with it. Audio and acceleration signals are recorded continuously. The camera takes pictures once a minute, and WiFi access points are logged to establish location. After the recording session, the user employs an offline annotation tool, which presents one image at a time, the corresponding sound clip, and a list of labels from which to choose. [image: sensor placement]
Also curious is their description of the algorithm that calculates the current level of interest of an event based on the context classification.
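The interest-triggered loop can be pictured as: classify each time window from sensor features, score the resulting context, and capture a clip when the score crosses a threshold. The classes, weights, and toy classifier below are illustrative assumptions, not the authors' trained model.

```python
# Hypothetical per-context interest weights (the paper learns these).
INTEREST = {"office": 0.2, "restaurant": 0.7, "street": 0.5, "car": 0.3}

def classify(features):
    """Toy stand-in for the paper's classifier: strongest cue wins."""
    return max(features, key=features.get)

def maybe_capture(features, threshold=0.6):
    """Return (location, capture?) for one window of sensor features."""
    location = classify(features)
    score = INTEREST.get(location, 0.0)
    return location, score >= threshold

# A quiet office window triggers nothing; a restaurant window does.
calm = maybe_capture({"office": 0.9, "restaurant": 0.1})
busy = maybe_capture({"office": 0.2, "restaurant": 0.8})
```

The real system fuses camera, microphone, and accelerometer features rather than a single score per class, but the trigger structure is the same.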
Why do I blog this? I am less interested in the purpose of the system itself (sharing material) than in the data extracted from context readings and how this could be used to tell a story (or to build up a narrative). Of course, given my interest in games, I see this device as intriguing and potentially relevant for mapping first-life experiences onto virtual-world counterparts; it could go beyond current pedometers that control dogs. [blogged by Nicolas on pasta and vinegar]
December 11, 2006
[iDC] Introduction and Blinks & Buttons
I'm Sascha Pohflepp, currently based in Berlin, where I spent the last couple of years at the University of the Arts' (UDK) visual communication department (some of the work from that time can be found at http://www.pohflepp.com). I also frequently contribute to the blog We-Make-Money-Not-Art.com and occasionally work with the Mediamatic foundation in Amsterdam.
My most recent work is a two-fold thesis project titled "Between Blinks & Buttons". It is an attempt to look at photography as an increasingly connected process and at the implications which arise from that, both for the individual and for the process of remembering and the camera as an object in itself. Quoting from the introduction:
"Photography has become a networked process. It no longer ends with pasting prints into an album. Instead, making them public through services like Flickr is rapidly becoming one of the main ways we treat our visual memories. The photographic process extends from preserving a moment to an act of telecommunication, with numerous implications for how we perceive reality, how we make our memories, and how we create a narrative from them.
The camera itself has become a networked object, and through the fusion of the snapshot camera and the mobile phone, this object will become an even greater part of our everyday lives. Cameras have always been recorders of their contexts, essentially equipped with a light sensor to capture a visual representation and a pressure sensor for the person who decides which moment will be saved: the button. Furthermore, digital photos come with a great amount of data attached to them as so-called EXIF tags. These include a multitude of precise details about the technical aspects of the shot, the make of the camera, how it was held, and when the image was taken.
For almost any given moment since early 2004, it is possible to find a photo on the web which was taken in the exact same second. For every one of my photos that I have a memory of and an emotional connection to, I can see someone else's moment. I can see what happened in another part of the world while I was doing what I remember when I see it. In that sense, cameras become networked buttons that create a link between two people through the simple fact that they did the same thing simultaneously: press a button. The cameras create a visual trace of it, with time as a reference."
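The EXIF tags mentioned in the quote are what make second-level matching possible at all. The sketch below pulls the capture second out of a photo's metadata; the dict stands in for what an imaging library such as Pillow would return, and the tag name and `"YYYY:MM:DD HH:MM:SS"` timestamp format follow the EXIF convention.

```python
from datetime import datetime

def capture_time(exif: dict) -> datetime:
    """Parse the EXIF DateTimeOriginal tag into a datetime."""
    return datetime.strptime(exif["DateTimeOriginal"], "%Y:%m:%d %H:%M:%S")

# Stand-in for metadata read from a camera-phone photo (values illustrative):
exif = {
    "Make": "Nokia",
    "Model": "N73",
    "Orientation": 1,
    "DateTimeOriginal": "2006:12:11 14:30:05",
}
when = capture_time(exif)
```

Once every photo carries such a timestamp, "taken in the exact same second" becomes a simple equality test on parsed values.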
This resulted in two installations – firstly "Blinks", a prism sitting on a kind of light-table that actually refracts the ray of time which metaphorically runs through every photo into all the other moments that had been simultaneously captured in various places all over the world.
Secondly, "Buttons", a camera which intentionally boils down the photographic apparatus to its essential button. Once this button has been pressed, no picture is taken (the person will remember the moment, though), but a networked device inside starts to look for photos from that very same second. It might take some time, but eventually, a photo will show up on the camera's display.
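The matching step at the heart of "Buttons" can be sketched locally: given the second the button was pressed and a pool of photos with timestamps, return those taken in that exact second. (The real piece polls a photo-sharing service over the network; the in-memory pool here is a stand-in.)

```python
from datetime import datetime

def taken_same_second(pressed: datetime, pool):
    """Photos whose capture time matches the button press, to the second."""
    key = pressed.replace(microsecond=0)
    return [p for p in pool if p["taken"].replace(microsecond=0) == key]

# Hypothetical pool of photos found on the web:
pool = [
    {"id": "a", "taken": datetime(2006, 12, 11, 14, 30, 5)},
    {"id": "b", "taken": datetime(2006, 12, 11, 14, 30, 6)},
    {"id": "c", "taken": datetime(2006, 12, 11, 14, 30, 5)},
]
matches = taken_same_second(datetime(2006, 12, 11, 14, 30, 5), pool)
```

In the installation, the device would keep re-querying until at least one match surfaces, which is why a photo "might take some time" to appear on the display.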
Interestingly, in both pieces the photos never feel random, since the user shares a moment with the other person who took the photo. At a recent presentation, someone called the whole process a "momentograph", which I found very fitting and beautiful. Please feel free to let me know what you think of all this; the website mentioned above also has some more information and videos.
iDC -- mailing list of the Institute for Distributed Creativity:: iDC[at]bbs.thing.net :: http://mailman.thing.net/cgi-bin/mailman/listinfo/idc
List Archive: http://mailman.thing.net/pipermail/idc/
November 01, 2006
Networked Objects & The Internet of Things
What happens when 1st & 2nd Life mash up?
Here are slides from my keynote at the Cross Media Week "Internet of Things" session. The talk itself was more extemporaneously authored from an outline and notes than written, but the slides capture the major conceptual beats.
Keynote Outline: From a Social Web to an Internet of Things: What happens when 1st Life & 2nd Life mash up?
Main Points: a. The digital communications network known as the Internet is an instrument of social engagement & exchange, and its instrumentalities (devices, databases, routers, web servers) are part of that social engagement & exchange. When other kinds of objects are "hooked into" that network, they are caught up in the messy imbroglio of the social life of the internet.
If this is the case, then we should consider these objects not as inert objects, but as social actors that shape and inform the kind of discourse that happens on these networks. So I will refer to the various and diverse social actors amongst these networks (internet, intranet, whatever) variously as "participants", social actors, social objects, and so forth. The reason is to emphasize that anything and anyone within the network has a role to play in the creation of social life of various kinds. More >> [blogged by Julian Bleecker]
September 26, 2006
The camera that takes others' photos
Well, well, I have to thank Julian for the scoop! Sascha Pohflepp is a new media artist based in Berlin. He also writes on this blog, and rumour had it that his graduation project at the University of the Arts in Berlin was kind of awesome.
His Blind Camera captures a moment at the press of a button. However, the device doesn't have any optical parts. The camera memorizes only the time of the picture and immediately searches the net for other photos that were taken at the same moment. Essentially, it is a camera that only takes photos that were created by someone who pressed a button somewhere else at the very time its own button was pressed.
After a few minutes or hours, depending on how soon someone else shares their photo on the web, an image will appear on the screen. In a way, it belongs half to the person who had pressed the button and still remembers that moment. Because of that connection, the photos are never dismissed as random, no matter how enigmatic they may be. Video.
Brilliant stuff, Sascha! I'm sure I'll still want to take my own pictures, but I'd love to have another camera that allows me to do on the spot what I can't help doing when I upload my images on Flickr: snooping around to check the photographs of other users who attended the same event as me, or know the same person, or happened to be in town that day. Also by Sascha: Fixr and Eavesdripping. [blogged by Regine on we-make-money-not-art]
September 15, 2006
The "electricity" of future participation
A few months ago, when we started to work on the Architecture and Situated Technologies symposium, it took me a while to get into the language (ubiquitous findable objects (UFOs), geo-locative systems, context-aware/ambient/ubiquitous/invisible/continuous/pervasive computing, locative media) and the unfamiliar context of architecture. I was more into all things networked and did not immediately see the connection. But soon I realized that the Internet of Things offers a captivating angle on the "electricity" of future participation in online sociality, be it through the hardwired or the wireless Internet.
Over the past months on the iDC list we started to talk about networked objects and "The Internet of Things." Things? Things are not a species of their own making. So, why talk of "things" instead of objects? There must be more than semiotic cuteness at play; the term Internet of *Things* can't just be about anthropomorphizing artifacts, machines, products, and gizmos.
Anne Galloway responds:
The sandbox of the future. Not long ago, only a few people saw much of a future in reading, writing, and video production. It was a consumer's world in which we were all "end-users" ("I have to shut up and settle for what comes out of the assembly line," Sterling, Shaping Things, p. 78). But that has changed with what some call the relationship revolution. There is a participatory turn under way. Bang!!! And you have 100 million MySpace members, 600 billion web pages online, and half of American youth contributing content online. (Well, it was not quite so sudden.) Now, the projection expands to participation/content production beyond the screen, engaging humans with networked objects. [continue]
I don't want to side-track an excellent post, Trebor, but I would like to reflect a bit, if I may, on the role that Bruce Sterling, the science-fiction author, is playing in the current practice of pervasive computing research and design. I bring this up not as a commentary on, or critique of, Bruce Sterling the person, but because I believe that traditional questions about authorship are still relevant. I also consider his conference presentations (including their circulation online), and particularly his book Shaping Things, to be rather active non-human actors in the current development of thinking on these matters.
So, here's my favourite Sterling quote about how Shaping Things came to be:
"I'd been trying to write a novel about ubiquitous computation, a science fiction novel, and it's set, you know, in the mid-21st century, and I'm trying to get it down on paper what it's like to work in an actually functional internet of things, and it's really a kind of serious ideational challenge, I mean it's just hard to make it convincing...I was asked to give a Toyota lecture at Art Center, because I knew people in the faculty..and delivered this sort of impassioned rant, saying, look, you know, I think this is gonna break big, and this is why, and I want you designers to kinda like think about this and help me out. What advice can you give me in kinda doing the background for my science fiction novel? And they just sorta stared for a second and said, well, we can't do anything about that but maybe you should join the faculty."
I love that he couldn't come up with a _convincing_ story. That someone else was willing to suggest that pervasive computing was not only a cultural myth (at least in part), but not a very good one at that!
But seriously, the man understands that _words do things_. He's a writer and a charismatic performer after all. And even if he wouldn't use these terms or references, I think he definitely gets Wittgenstein's language games, Austin and Searle and Derrida on speech acts, and even (or especially) Callon and Latour's processes of translation.
So he's been attempting to mobilise his words, his ideas, at conferences--and others are trying to mobilise them as well for their own purposes (including coining their own neologisms). And Sterling's ideas are largely science fiction/social theory/design ideas, although I totally agree with your assessment that Sterling's perspectives lack "deep socio-political analysis" and give too much (strangely apolitical) agency to designers and design. Not surprising, I think, since he is not a social researcher and he was supported by a design department while he wrote the book.
So what is it then, I wonder, that Sterling _succeeds_ in mobilising? Is it the narrative and performative aspects of technology? Is it a tension between fact and fiction? Is it a _critical_ perspective? To be honest, I'm not sure. But I do know that it focusses on things and environments more than on people. In other words, I believe that it fundamentally lacks discussion of traditional social and cultural topics like race, class, gender, power (inequality) and, as you also suggest, history. And it makes it very difficult to understand or appreciate my role as a human, as a woman, as a scholar, etc. in this brave new world of his.
Some of the risks we attribute to new technologies often rely on a _lack of reflexivity_ about our own positions. Unlike Sterling, I don't believe our goal should be to "revolutioniz[e] the interplay of human and object" but rather first to acknowledge that we have never been as separate from objects as his position requires, and that small revolutions happen every day.
And now back to our regular programming...
Dept. of Sociology & Anthropology
7th Floor, Loeb Building
1125 Colonel By Drive
Canada K1S 5B6
Ulises Ali Mejias
The thing about the internet of things
While I am not very familiar with the whole 'internet of things' discourse, I recognize plenty of recurring themes to be troubled about. We encounter, once again (but with new buzz words), the argument that new technologies can rehabilitate our relationship to the real and to the social. This time, however, instead of investing our sense of self entirely in the virtual (soooo 1990's), we can invest it in 'things' (human-object assemblages) which populate reality, but which are still interconnected and organized in the virtual. The return of the object or 'thing' would seem to suggest that we are moving away from the idea of the virtual as an alternate realm of reality and towards a more complex understanding of reality as encompassing both the virtual and the actual (thank you, Monsieur Deleuze). However, I fear that our technophilia is obscuring the politics of these virtual-actual assemblages, obstructing the need to critically assess how agency is distributed amongst things connected through the internet.
One possible direction this critique can take is to analyze new (and old) modes of production and consumption in the internet of things. The corporate call to action (there are non-commercial alternatives, thankfully) is that we must break free of the shackles of passive consumption to enter a new era of active consumption organized around networked objects scattered in the 'real' world. To be called an audience is an insult in this age when "the demand side supplies itself," when —given the sanctioned source materials— we can all be producers or re-mixers of the objects we shall consume (what I call 'ultimate consumerism'). The difference is that now we need not be stationed in front of our computers to do so; our regained mobility and wirelessness signals a return to the real. Hurray! The freedom to move around while being invisibly tethered to the market, digitizing things or information about things outside the market and putting them in circulation within it. Needless to say, I share Anne's concerns about the fetishizing of 'things' and about the 'return to the object' as the privileging of objectivity.
What I find most troubling is that the discourse of the 'internet of things' suggests a certain inevitability: the true potential of the internet of things can only be achieved to the extent that it encompasses everything (it is not accidental that the internet of things is an extension of the discourse of ubiquitous or pervasive computing). Shouldn't we question this inevitability? After all, the act of 'outsourcing' (to use Trebor's term) our memory and social functions to internet things is not without political and social consequences: The mobility of us cyber nomads —our ability to detach and re-attach ourselves to reality at will— is usually acquired thanks to the drudgery and exploitation endured by someone else (the call center worker in India, the Cassiterite miner in Congo, the factory worker in Mexico or Taiwan, etc.).
At the same time, our response should not be a blind rejection or phobia of things. There are no more 'things' today than before, nor do we rely more on 'things' now than in the past. I was reading an old essay by Ivan Illich ("Silence is a Commons") in which he basically laments the pollution of silence by new electronic things. While I share some of his concerns, I wonder if an average day is less filled with things for someone living on the fringes of consumerism than for someone living within it. Of course, the differences should be accounted for (from natural things, to things produced by us, to things produced by somebody else), but assemblages of humans and things are not abnormal or evil, a priori. The question is: if we have always delegated (or in the worst case, surrendered) social agency to things in order to control, manipulate, facilitate, condition, interpret, etc., what functions are the 'things' in the internet of things fulfilling?
I don't think Illich was arguing against new things per se, but against the loss of opportunities to reflect on what is being substituted by or forgotten with the new things, to be critical of new things, and to reject things we find unsustainable. [blogged by Ulises Ali Mejias on IDEANT]
September 14, 2006
[iDC] Architecture and Situated Technologies - September Overture
Thanks Mark for your thoughtful introduction to the September discussions. I've been following with great interest since I introduced myself in July, and now I'd like to share and begin to explore some things that have particularly resonated with me.
You mention the "current status of the material object [and] forms of embodied interaction" and I've often thought about this 'return' to the body and the physical after the (failed?) promises of cyberspace disembodiment. In other words, I see a kind of re-embodiment ethos at work right now in research, art, and design practice, and a renewed commitment to the material. In some ways, then, it seems that the pendulum of technological desire has merely swung to the other side.
Since my first two degrees are in anthropology and archaeology I also have a special interest in material culture. Coupled with my doctoral work in social studies of science and technology, I find this question of materiality to be rather persistent in my research. If you'll forgive my self-referencing, Matt Ward and I wrote a paper recently about some intersections we saw between archaeology and locative media design:
and I continue to imagine a possible future where complex legal battles are being fought over the cultural repatriation of digital artefacts
I mean, I sometimes wonder whether our pervasive computing collections (we _are_ still talking databases) will more closely resemble the British Museum (http://www.thebritishmuseum.ac.uk/) or the City Reliquary (http://www.cityreliquary.org/)?
I also wonder about a current fetishising of 'things'. Or how can we 'return to the object' without privileging objectivity? I really disliked the phrase 'the internet of things' when I first heard it, but I've since embraced it as a rather lovely manifestation of a type of contemporary commodity fetishism. Earlier this year I gathered links on the historical development and use of the phrase, and it's not difficult to trace its movements--its inscription devices--from academic-industry research partnerships to popular business and technology publications to popular blogs and back to academic-industry research partnerships.
You may note in that cycle I just described there was no mention of particular technologies, or rather no mention of pervasive computing devices. I think that the material status of 'pervasive computing' today can be profitably distinguished from the material dreams of 'the internet of things'. (The _things_ we're talking about are different in each case.) In actuality, most embedded computing today is used for surveillance/monitoring or 'consumer convenience' and _not_ for participatory media production. I guess my point is that pervasive computing in its most banal and mundane expressions is by definition dull and boring. It's the Oyster card used on the Tube every day, the fingerprint scan to get into the office, the RFID tag stuck on the broom. But that's also _everyday life_ and sometimes boredom is just what people want and need. Plus, I try to never underestimate how inventive people are when it comes to finding hope and joy in small things.
I think that David Lyon got it right when he described surveillance as 'social sorting,' even if his sense of the social was too humanist for my taste. The sorts of structured and stored monitoring necessary for context-aware computing are also matters of spatial and cultural sorting (inclusion and exclusion). I find that we're still not very clear on these notions of 'publics' or 'communities' either, or rather that people are working with a multitude of different understandings that are often glossed over in conversation. Add to this the matter of matter--or things--and we're in a downright voluptuous state, where _things_ often don't co-operate.
So we induce, we create, places where it's easy to associate only with those who share our interests and values, where it's easy to avoid being accountable to, and for, precisely those interests and values that we do not share. And all despite the fact that we pass different others--move through spaces together--every day! (I guess that does trouble me.) At the same time, I'm not sure it's productive to essentially reframe this social and cultural discussion in terms of public and private. I'm not sure that dichotomy is either reliable or useful beyond helping me understand the everyday movements in-between...
In any case, I do continue to focus my work on how we are (re?)imagining our being-together in mobility--and I'm looking forward to our continuing discussions and to meeting some of you at the symposium.
Dept. of Sociology & Anthropology
7th Floor, Loeb Building
1125 Colonel By Drive
Canada K1S 5B6
From Mark Shepard, September 20, 2006:
> I also wonder about a current fetishising of 'things'. Or how can
> we 'return to the object' without privileging objectivity?
I think this is a key question. As Sterling noted at the talk he gave for the Lift conference last March, the phrase "Internet of Things" is a useful one if you are looking for venture capital in southern California. And indeed, the discourse surrounding the convergence of ubiquitous / embedded / context-aware / geospatial / locative technologies finds its (fundable) applications predominantly in the commodity object or "objective" control systems for the military-industrial-light-and-magic complex. So it's not surprising that the circuit running from "academic-industry research partnerships to popular business and technology publications to popular blogs and back to academic-industry research partnerships" produces ideas that feed this fetish for the object and objectivity.
> So, why talk of "things" instead of objects?
Well, for one thing, calling them objects doesn't account for meanings such as "That's another thing entirely", "She knows how to handle things", or "We're just doing our thing." Things are "actions, events and affairs" as much as they are "artifacts." Networked _things_ are not at all the same as networked objects (but they may include them). When we reduce _things_ to objects, however, we limit our ability to consider how _things_ are embedded within everyday life, their meaning contingent upon their use (or mis-use), and the relations they enact or perform.
Take Heidegger's "jug", for example:
> "No representation of what is present, in the sense of what stands
> forth and of what stands over against as an object, ever reaches to
> the thing qua thing. The jug's thingness resides in its being qua
> vessel. We become aware of the vessel's holding nature when we fill
> the jug... the pouring that fills it flows into the empty jug. The
> empty space, this nothing of the jug, is what the jug is as the
> holding vessel... But if the holding is done by the jug's void,
> then the potter who forms sides and bottom on his wheel does not,
> strictly speaking, make the jug... The vessel's thingness does not
> lie at all in the material of which it consists, but in the void
> that holds." 
Heidegger's understanding of the thing stands in contradistinction to the object. While I find the larger argument he's making in this essay problematic, I do find useful the idea that the "thingness" of the thing doesn't reside in its being a representable object, but rather in the way _things_ bring human practices together and make them intelligible.
Recently I screened Tati's "Play Time" for a group of graduate students. I am always fascinated by the way _things_ for Tati - modernist chairs, glass pane doors (or at least their handles) - carry with them an excess beyond their role as functional objects. With the chairs, for example, that excess is the sound they produce when sat upon, how the body engages with the acoustic properties of the material, and the (hilarious) social implications of this... This excess often lies in the difference between how things are designed and how they are used, or how they perform in ways not anticipated by their designers.
Dunne and Raby's "post-optimal" electronic objects would appear to take this excess as an opportunity for a reflexive, critical design practice, one that doesn't so much reject the optimizations and efficiencies of Taylorism as it considers them moot: already achieved, and therefore not much of a design challenge. If anything, contrary to Trebor's suggestion, I'd say critical design can play a key role in shaping a future of things that are not invested in "intentionally restricting the way the user can behave, or enforce certain modes of behavior."
> I fear that our technophilia is obscuring the politics of these
> virtual-actual assemblages, obstructing the need to critically
> assess how agency is distributed amongst things connected through
> the internet.
The question of agency here is crucial. But I think it's useful to distinguish between humans and things in actor networks. This might help abate some of the hysteria surrounding the current discussion. That we can see networked things as systems doesn't necessarily mean that these systems can think, act, or exercise power in any subjective way.
Still, subjective human agency is but one form of "being in action or exerting power", and it's important to consider how representative democracies, for example, can be influenced by _things_ that are capable of asserting themselves within networked societies.
Take Bruno Latour's Parliament of Things, for example:
> Let one of the representatives talk, for instance, about the ozone
> hole, another represent the Monsanto chemical industry, a third the
> workers of the same chemical industry, another the voters of New
> Hampshire, a fifth the meteorology of the polar regions; let still
> another speak in the name of the State; what does it matter, so
> long as they are all talking about the same thing, about a quasi-
> object they have all created, the object-discourse-nature-society
> whose new properties astound us all and whose network extends from
> my refrigerator to the Antarctic by way of chemistry, law, the
> State, the economy and satellites. 
Or Julian Bleecker's description of Blogject agency:
> Agency as I am using it here does not just mean a local “artificial
> intelligence” that makes a Blogject able to make autonomous, human-
> like decision or fashion croaky human-speech from text. Blogjects
> have no truck with the syntax of human thought. Things could not
> care any less about their Turing Test report card. Blogject
> intellect is their ability to effect change. Their agency attains
> through the consequence of their assertions, and through the
> significant perspective they deliver to meaningful conversations.
> Blogjects bring something heavy to the table. Or, they are brought
> to the table because they have semantic weight. Agency is
> literally imbued in Blogjects. Things that matter completely sully
> the previously starched white relationship between subject and
> object, human and nonhuman. Things that matter inflect the course
> of social debate and discussion, and cannot help inflicting local
> and global change. Witness the Spotted Owl. Witness the Pacific
> Northwest Salmon. Witness all the non-human, non-subject "things"
> that became fully imbued with the status of first-class citizens.
> Heck, most humans don't have the capacity to effect the kind of
> worldly change and receive the same order of protection, status and
> economic resources as a fish.
These networked things are obviously far more than "just pieces of metal and silicon... " and at the same time far less than the hype and hysteria currently surrounding them might suggest.
Heidegger, Martin. "The Thing." In Poetry, Language, Thought, trans. A. Hofstadter. New York: Harper & Row, 1971.
Latour, Bruno. We Have Never Been Modern, trans. C. Porter. Cambridge, MA: Harvard University Press, 1993. p. 144.
Bleecker, Julian. "Why Things Matter." 2006. http://research.techkwondo.com/files/WhyThingsMatter.pdf
iDC -- mailing list of the Institute for Distributed Creativity:: iDC[at]bbs.thing.net :: http://mailman.thing.net/cgi-bin/mailman/listinfo/idc
List Archive: http://mailman.thing.net/pipermail/idc/
September 08, 2006
A Sculpture Composed of Simultaneous, Web-Connected Dance Parties
DANCEPOD 2006: On September 9th at 11pm in Portland, 12am in Guadalajara, 2am in New York City, 3am in Mexico City and 7am in Berlin, dancepod will present an entirely new kind of sculpture: a sculpture composed of simultaneous, web-connected dance parties. The parties, coordinated and developed in conjunction with artists and presenters from each city, will utilize identical dancepod installations. The installations will become the core of a shared physical and virtual experience, supporting streaming video and music as well as live DJs, VJs, and surprising guest artists. The moving bodies of dancing participants will complete the sculpture.
As part of the Portland Institute of Contemporary Art's (PICA) Time-Based Art (TBA) Festival, in conjunction with PS122, Scene Downtown (Earl Dax), Harkness A/V (Nick Hallett), and technical directors Kraft + Purver, these 4 DANCEPOD sites will be linked by live video starting at 2:00 AM Eastern Standard Time.
NYC DANCEPOD features an eclectic array of New York artists. Join DJs Kevin Graves (Brite Bar) and Van Scott (Patricia Field's party at element) and dance to visuals by artists assembled by Nick Hallett and Harkness A/V, including Chika and Boris. Live performances by John Moran and Saori Tsukada, Sxip Shirey, and Glenn Marla.
Chez Bushwick will be hosting "The Changing of the Garde" starting at 8:00 PM on Saturday. Come early for DANCEPOD and enjoy performances by:
Wanjiru Kamuyu – Spiral
Jim Staley – Solo Trombone
Bruce Nauman - “Abstracting A Shoe (1966)” – Video Art
David Vaughan, Michael Cole, Jonah Bokaer – A John Cage Birthday Reading
Elke Rindfleisch - Untitled
Also... TECHNOPIA Interviews Carla Peterson, Newly-Appointed Artistic Director of DTW.
All this for $5. Come to both the Chez Bushwick and DANCEPOD events for just $10! That's like seeing the early show for FREE! (Regular DANCEPOD admission is $10).
3rd Ward Brooklyn
195 Morgan Ave.
Bushwick, Brooklyn, NY 11206
August 08, 2006
Chit Chat Club
A Teatime Telepresence
Chit Chat Club is an experiment in bringing together the cafe and the online world. Chit Chat Club is now live. In a cafe in downtown San Jose, three inhabitable sculptures sit at tables, waiting to be brought to life. One is an undulating form, another a chair just starting its metamorphosis into human form, and the third is a giant spoon. Be one.
Cafes are social spaces - they are places where people come to converse, to meet friends, to people watch. It is a physical space, rich with the smell of coffee and the sound of chatter, a navigable place that people must negotiate to find good seats, to see and be seen. The cafe is local, fixed in space and reflecting and defining the social structure of the neighborhood.
The online world is also social, but far less physical. Its inhabitants have no bodies, its borders are porous. It is global rather than local. Yet there is a different kind of depth that exists in the online world, a depth of persistence, of vast collections of data, of conversations that remain permanently archived, unlike the ephemera of the spoken word.
Conversing online while in a cafe is not of course a novel idea. Many cafes today feature wireless access, and patrons read blogs, write email, instant message, etc. Yet this arrangement moves the patron's attention from the public and physical space of the cafe to the private world of these typed interactions.
Chit Chat Club brings online visitors into the public physical space of the cafe. It does this both by providing them with a view of the cafe and by giving them a physical presence within it.
August 03, 2006
iDC: Architecture and Situated Technologies
the "internet of things"
Welcome to the August installment of the Architecture and Situated Technologies discussion. Thanks, Trebor, for warming up the netwaves. I wanted to pull together a few threads that were begun in July and steer the discussion towards the objects that we, as producers of things, will be making. In light of the "internet of things" I thought it would be worthwhile to collectively characterize, through examples, the nature of these things. Will their mode of production differ from the way we have previously made things? Will they have agency beyond immediate service to users? What are some examples? How will these objects work at the scale of architecture, technically and socially? Will there be a further individualizing of ownership, or will new types of public access to and ownership of these objects emerge?
In his brief pamphlet manifesto Shaping Things, Bruce Sterling calls these objects "spimes". (The book is a good summary of the whole argument, though it unfortunately lacks a bibliography; it would be nice to come across a manifesto with one.) Setting aside the proselytizing, Sterling makes a lucid observation of how information and the material object will become so enmeshed that: "you care little about the object per se; that physical object is just a material billboard for tomorrow's vast, digital, interactive, postindustrial support system."
The other useful insight he offers is that the confluence of material objects (atoms) and information (bits) will give us incredible insight into an object's mode of production. We will be able to track its entire life cycle: construction from raw materials, uses and misuses, and finally obsolescence and post-existence. This will provide us a means to properly manage resources, ultimately leading to a "sustainable" society.
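The life-cycle tracking Sterling describes can be reduced to a very small data structure. A minimal sketch, assuming a simple append-only event log per object; the class names, stages and the example object are invented for illustration, not drawn from Shaping Things:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Tuple

@dataclass
class LifeCycleEvent:
    stage: str        # e.g. "manufactured", "sold", "repaired", "recycled"
    timestamp: str
    note: str = ""

@dataclass
class Spime:
    """A hypothetical spime record: an object identified from raw
    materials through to post-existence via its logged events."""
    object_id: str
    events: List[LifeCycleEvent] = field(default_factory=list)

    def log(self, stage: str, note: str = "") -> None:
        # Append one stage of the object's life cycle with a timestamp.
        self.events.append(
            LifeCycleEvent(stage, datetime.utcnow().isoformat(), note))

    def history(self) -> List[Tuple[str, str]]:
        # The object's whole biography, oldest event first.
        return [(e.stage, e.note) for e in self.events]

# Example: one tracked object, cradle to grave.
chair = Spime("chair-0001")
chair.log("manufactured", "FSC-certified birch ply")
chair.log("sold")
chair.log("recycled", "frame reclaimed as raw stock")
```

Querying `chair.history()` then yields the full material biography that, in Sterling's account, would let resources be managed across the object's entire existence.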
An extraction from Rob's post:
"The most interesting thing about it will be that you won't be able to see it all at once, because all these data structures, computational devices, digital networks and cyberspaces that are built upon those components will be invisible unless you have the password or unless you are a member of the group that is permitted to see them".[iii] In such an environment - a truly magic one - people themselves become information spaces. Buildings, cars and people become information spaces."
Access to these information spaces is especially interesting. The carving of the network through group membership and password protections is inevitable as people are drawn to creating collectives. But what about "non-biological" things? If agency is to be given to them what kinds of collectives might they (in)(de)form? What would characterize their performances? Purpose?
From tiziana's post:
"[I find] the October symposium interesting because it addresses exactly this feature of this type of communication that I am interested in - communication not as an exchange of messages between subjects, but communication as an environment - a field of interacting effects. There is a whole set of terms that have been used historically to talk about the many - peoples, mobs, masses, crowds, populations, publics, multitudes. I have come to believe that the interconnection of bodies/minds present some really interesting features which are biological without being reducible to what we might call a 'human nature'. An inorganic biological, feeling, sensing, perceiving, intelligent mass (?) - a biomass that constitutes on the one hand the field of interacting effects with an autonomous logic not reducible to individual actors and also a surface for experimenting with strategies of manipulation of affects, percepts and ideas (for publicists, marketing experts, but also activists, artists, engineers, architects, designers etc)."
The topology of self-organization is helpful for understanding the behavior of these "things". Parts and wholes are related but not easily reducible to one another. But with totalizations like these, the specter of surveillance and control, as pointed out by Grant, is not far behind. As an architect I find the "biomass" visualization helpful, some way to grasp the "you won't be able to see it all at once". I also think it points to the spatial turn of these "things", which on one level are conceived as tools but instantiate themselves as environments:
From mark's post on cellphones:
"Kenichi Fujimoto refers to the devices themselves as "territory machines" capable of transforming any space -- a subway train into "(one's) own room and personal paradise." While late 20th century (and predominately western) notions of the Internet promised to unlock us from the limitations of offline relationships and geographic constraints, keitai space flows in and out of ordinary, everyday activities, constantly shifting between virtual and physical realms."
"The problem for me is that these new technologies are not tools which can be used by autonomous agents in one way or another depending on context and will. They constitute an environment, a milieu, a field of effects, an assemblage, a zone of indistinction between natural, social and technological components and effects. Can you say an environment is liberating or oppressing depending on how you use it? Is it a legitimate question to ask of an environment?"
Is this fluctuation from tools to environments simply a consequence of digital technologies? Or is there a paradigm shift in our expectations of the objects we produce?
“This way of thinking about architectural systems is not necessarily technological: it is not about making your online shopping experience more efficient, or your apartment funky and interactive. Nor is it about making another nice piece of hi-tech lobby art that responds to people flows through the space (which is just as representational, metaphor-encumbered and unchallenging as a polite watercolour landscape). It is about designing tools that people themselves may use to construct (in the widest sense) their environments and thus to build their own sense of agency. It is about developing ways to make people themselves more engaged with, and ultimately responsible for, the spaces that they inhabit. It is about investing the production of architecture with the poetries of its inhabitants.”
And john’s response:
"This is the key: what tools? Can we look to rapid prototyped houses like those being designed/proposed by someone (name escapes me). Are we talking about the company in Calgary (again, blanking on the name) that lets you design your house online from a wide selection of elements and then they ship you the parts, a cross between prefab, lego and simcity? Or about responsive emotive materials (i.e. Oosterhuis' Hyperbody?) Or other stuff entirely? I'm very intrigued by the possibilities but if you're going to promote do-it-yourself space design in the physical world you have to deal with hard realities like materials, costs, durability, the elements, bylaws, lived use, walls falling down – things that simply don't have the same urgency in virtual space. So the question is whether the comparison with p2p media is apt given the constraints of actualizing that fluidity in the material world. What is the material-world architectural equivalent of an mp3 - or of this listserv? I have a feeling there are some really interesting answers available but I'm not sure what they are."
Thanks for the question john. Perhaps the architects among us may want to take a crack at it.
I look forward to this discussion.
August 02, 2006
Earworm Assault Devices
Weapons of Mass Distraction
Prologue: Sound is unavoidable (the ear can't be shut like the eye) and evolutionarily hardwired into our cognitive apparatus; it can reach into the subconscious and arouse emotions. A well-made sound can get stuck in the head of the hearer and loop for a varying time span. The infected person oftentimes repeats the sound involuntarily by singing, humming or whistling, eventually spreading the earworm like a virus. Messages can be modulated onto earworms like radio waves on a carrier frequency; as a result, earworms can be used to distribute messages or manipulate the individual. This fact is recognized and commercially exploited in various fields of modern life - pop music, cellphone ringtones, audio branding and elevator music, to name a few.
Global brands use huge budgets to propagate their earworms through multiple costly channels, while the respectable citizen remains defenceless and powerless. Until now:
The Earworm Assault Devices (EAD) enable individuals or small groups to distribute their own earworms. Small pieces of highly infectious sound or music can be recorded onto the devices and fired at single or multiple persons, eventually spreading your message out into the world. They come in different shapes and sizes and are affordable for small budgets. The EWGL-1 [Earworm Grenade Launcher v.1] is one of several models currently available:
Technically the most advanced development within the Earworm Farm program, this caterpillar-driven tank is extremely agile and can be stopped by almost nothing. It is equipped with an on-board camera and a grenade launcher with a range of approx. 3 meters. The grenade is loaded before deployment (max. earworm duration 12 seconds) and the vehicle can be guided into the target area by remote control. The camera signal is radio-transmitted, giving the vehicle an extraordinarily broad reach. Upon arrival at the target site, the grenade is launched by pressing a button on the remote control, and the earworm is triggered and looped on impact.
The EAD were developed under the umbrella of the Earworm Farm Project for the CyNetArt Dresden with generous support by the Ministry of Science and Arts, Saxony. See also: Roman Kirschner's EarwormCollider for the creation of earworms. Related Stuff: Photos of the presentation during CyNetArt 05 :: The_Earworm_Farm_Project.pdf [via Pixelsumo]
July 26, 2006
Upcoming workshop: @ Mediamatic Amsterdam
RFID and The Internet of Things
After a successful CrashCourse in May, Mediamatic now presents a second workshop on RFID and The Internet of Things :: 11, 12, 13 September 2006 :: Confirmed lecturers and trainers: Julian Bleecker (US), Timo Arnall (Norway) and Arie Altena (NL).
RFID allows for the unique identification of objects, and any kind of online data can be linked to these unique IDs. If RFID becomes an open web-based platform, and users can tag, share, and contribute content to the digital existence of their own places and objects, we can truly speak of an Internet of Things. This opens perspectives for new sustainability scenarios, for new relations between people and the stuff they have, and for other locative applications.
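Stripped to its core, the idea is that a tag carries only a unique ID, and an open platform resolves that ID to user-contributed content. A minimal sketch of such a resolver; the registry structure, the tag ID and the URLs below are invented for illustration:

```python
# A hypothetical open registry: tag UID -> list of contributed content.
registry = {}

def tag_object(tag_id: str, url: str, description: str) -> None:
    """Associate a piece of online content with a physical tag's unique ID.
    Anyone may contribute, which is what makes it an *open* platform."""
    registry.setdefault(tag_id, []).append(
        {"url": url, "description": description})

def resolve(tag_id: str) -> list:
    """What a reader (say, an RFID-equipped phone) would look up on a scan.
    Unknown tags simply resolve to nothing."""
    return registry.get(tag_id, [])

# Example: two users contribute content to the same tagged bicycle.
tag_object("04:A2:19:6B", "http://example.org/my-bicycle", "repair history")
tag_object("04:A2:19:6B", "http://example.org/bike-routes", "owner's notes")
```

Scanning the tag then returns both contributions, while an untagged object's ID resolves to an empty list.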
The participants of this workshop will develop critical, utopian or nightmarish concepts for an Internet of Things in a hands-on way. Ideas can range from scripts for small new rituals to outlines of societal changes of epic scale. Prototypes can be tested with the workshop tools The Symbolic Table or the Nokia3220 phone with RFID reader.
The workshop has room for 16 designers, artists, thinkers and makers. Participation fee is €350 per person, ex BTW. Lunches, technical equipment and assistance are included. If you want to participate in this workshop, please register at our online registration form.
July 20, 2006
Morphing Objects and Transforming Interfaces
Talking to the Hand, Oren Horev's exploration into the interactive qualities of shape-shifting technologies, examines morphing objects and transforming interfaces. The project is driven by the thought that, by changing their shape, computers and digital objects can give the user a clear understanding of their status, both tactilely and visually.
The first object developed to illustrate the concept is the cube-shaped InSync hard drive. It indicates the level of synchronization (percentage) with the source computer by twisting itself, misaligning its shape. The less overlap between the two file structures, the more the hard drive twists. To synchronize the drive and computer the user either clicks a button on the screen or nudges the mechanism itself to initiate the alignment.
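The behaviour described, twist proportional to how far out of sync the drive is, can be sketched in a few lines. This is a reading of the description above, not the prototype's actual control code, and the 45-degree maximum twist is an assumption:

```python
# Assumed maximum physical twist of the drive's cube body, in degrees.
MAX_TWIST_DEG = 45.0

def twist_angle(sync_fraction: float) -> float:
    """Map synchronization level (0.0 = fully out of sync, 1.0 = fully
    synced) to a physical twist angle: less overlap, more twist."""
    sync_fraction = max(0.0, min(1.0, sync_fraction))  # clamp to [0, 1]
    return (1.0 - sync_fraction) * MAX_TWIST_DEG

def synchronize() -> float:
    """Clicking the on-screen button or nudging the mechanism re-aligns
    the drive: fully synced means no twist at all."""
    return twist_angle(1.0)
```

So a half-synchronized drive would sit twisted at 22.5 degrees, untwisting as the file structures converge.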
The second object is a sort of mouse / touch panel hybrid called Terrain of Information. Moving the touch pad controls the on-screen cursor. Simultaneously, the surface of the touch pad morphs in relation to the objects and applications being pointed at. It reveals hidden qualities of an object: a big or small bump, for example, represents the size of a folder, while a rhythmic movement indicates the beat of an Internet radio station. The prototype is composed of a pin-pad and a screen application. The pin-pad morphing is emulated by a micro-controlled robotic arm running beneath it. A Processing (Java) application manages the icons on screen and the kinetic calculations.
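The two mappings described (folder size to bump height, radio beat to rhythmic surface movement) could plausibly look like the sketch below. The log scaling, height cap and normalisation are invented, not taken from the prototype:

```python
import math

def bump_height(folder_size_mb: float, max_height_mm: float = 10.0) -> float:
    """Larger folders raise a larger bump under the fingertip.
    Log-scaled so a 1 GB folder doesn't need a 1-metre pin, and
    capped at the pad's assumed maximum pin travel."""
    return min(max_height_mm, math.log2(1 + folder_size_mb))

def pulse_height(t_seconds: float, bpm: float) -> float:
    """A radio station's beat becomes a rhythmic rise and fall of the
    surface: a sine wave at the track's tempo, normalised to 0..1."""
    phase = 2 * math.pi * (bpm / 60.0) * t_seconds
    return 0.5 * (1 + math.sin(phase))
```

Sampling `pulse_height` on each control-loop tick would drive the pins up and down in time with a 120 bpm station, while hovering over a folder would hold a static `bump_height` instead.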
The last prototype is the TactoPhone, with an active 3D surface that animates and displays shapes (video). The phone interface allows for interaction with location-based information "on the go" and "in the pocket". Video scenario 1: the user sets his phone to alert him of nearby events; video scenario 2: the phone tracks the location of people.
Talking to the Hand suggests that shape shifting technologies could bring our current familiarity and comfort with physical objects into the virtual world, creating a more integrated experience.
July 05, 2006
The MagicBook + The Book Radio
What's New in Books
The MagicBook explores seamless transition between reality and virtual reality. When users look at the pages of a real book through a handheld display, they see virtual content superimposed over the real pages: augmented reality. When they see an augmented-reality scene they like, users can fly into the scene and experience it as an immersive virtual environment. Currently the user can transition smoothly between these two fixed viewing modes: the augmented-reality view and the virtual-reality view.
The MagicBook also features functionality that supports collaboration: When several users look at the same book page they can see the augmented reality image from their own viewpoint. When one of these users decides to switch to the Virtual Reality mode and "fly" into the virtual model, the other users are able to see him or her as a virtual character in the scene.
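The collaborative behaviour above boils down to a small piece of shared state: each user is in one of the two viewing modes, and anyone who has "flown in" appears as an avatar to everyone else. A sketch of that logic, with class and method names invented for illustration (this is not the MagicBook's actual API):

```python
class MagicBookSession:
    """Hypothetical shared session: tracks each user's viewing mode."""

    def __init__(self):
        self.users = {}  # user name -> "augmented" or "immersive"

    def join(self, name: str) -> None:
        # Everyone starts out viewing AR content over the real page.
        self.users[name] = "augmented"

    def fly_in(self, name: str) -> None:
        # Entering the scene as an immersive virtual environment.
        self.users[name] = "immersive"

    def fly_out(self, name: str) -> None:
        # Back to the augmented view of the physical book.
        self.users[name] = "augmented"

    def visible_avatars(self, name: str) -> list:
        """Users who have flown in appear as virtual characters
        to every other viewer of the same page."""
        return [u for u, m in self.users.items()
                if m == "immersive" and u != name]

# Example: two readers share a page; one flies into the scene.
session = MagicBookSession()
session.join("ana")
session.join("ben")
session.fly_in("ana")
```

At this point ben, still looking at the augmented page, would see ana as a virtual character in the scene, while ana sees no avatar of herself.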
The Book Radio is a design project by Vinay Venkatraman & Pei Yu from the Interaction Design Institute Ivrea. The Book Radio appears to be a regular-looking sketchbook, but sensors and a microcontroller embedded in the spine allow its users to scan radio channels just by flipping the pages of the book. Bookmarking radio stations and annotating favourite programs by writing onto the relevant pages are also supported, and the volume can be changed by sliding the associated bookmark.
The Book Radio is wireless and holds no screens, dials or buttons. Thus, the project is concerned with exploring the possibilities of creating intuitive and subtle interfaces.
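One plausible reading of the page-flipping interaction: each physical page maps to a proportional point on the FM band, and annotating a page bookmarks that station. The band limits, page count and bookmark scheme below are all assumptions about how such a device might work, not documented details of the project:

```python
# Assumed FM band and page count for illustration.
FM_LOW, FM_HIGH = 87.5, 108.0
PAGES = 100

def frequency_for_page(page: int) -> float:
    """Flipping to a page tunes the matching point on the FM band:
    page 1 sits at the bottom of the band, the last page at the top."""
    page = max(1, min(PAGES, page))  # clamp to the physical book
    return round(FM_LOW + (page - 1) * (FM_HIGH - FM_LOW) / (PAGES - 1), 1)

# Writing on a page 'bookmarks' that station with the annotation.
bookmarks = {}

def bookmark(page: int, note: str) -> None:
    bookmarks[frequency_for_page(page)] = note

bookmark(1, "morning news")
```

Flipping back to an annotated page would then retune the radio and surface the note, much as the sliding bookmark re-finds a favourite station.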
June 07, 2006
Networked things and the old/new objectivism
"...In any case, what strikes me in this "renaissance of things" is the creeping tendency to fetishise (to reify?) the object. This happens too in social software, and user-centred design. Both 'the social' and 'the user' become paramount, yet remain unexamined. More people are citing Latour's influence, often summarised along the lines that objects have agency too, which is technically correct. But Latour isn't interested in objects, he's interested in relations - in actant-networks, collectives of humans and non-humans, and processes of translation.
If we actually follow Latour, or any of the critiques of ANT, then it's not the things themselves that are interesting, but rather the imbroglios they comprise. Julian and Nicolas suggest this when they claim "a new kind of digital, networked ecology in which objects become collaborators," but objects have always been collaborators. The word 'object' comes from Latin 'to throw in the way,' which may explain why people fall back on the idea that we now need to integrate all of these objects into our understanding of the digital. But, at the risk of stating the obvious, the digital is always already material and real. So why a "renaissance" at all?..." From Networked things and the old/new objectivism by Anne Galloway [blogged by Anne on Purse Lips Square Jaw]
June 02, 2006
Experimenta @ FACT
Under the Radar
Kill your television! Get ready for a totally wicked exhibition. Experimenta Under the Radar invites UK audiences to experience and interact with the high voltage works of Australia’s best and most innovative media artists: 16 June - 28 August, 2006.
FACT, the Foundation for Art & Creative Technology is the UK’s leading organisation for commissioning and presentation of film, video and new media art forms. FACT exists to inspire and promote the artistic significance of film, video and new and emerging media.
ZiZi the Affectionate Couch (2003) :: Stephen Barrass, Linda Davy & Kerry Richens: An invention inspired in equal parts by a shaved poodle, a fluffy Persian cat, and an exotic alien sea slug, ZiZi growls when sat on, purrs when touched, and emits soft groans of delight if you stroke her long fur. If left alone, ZiZi mews for attention. ZiZi is an affectionate ottoman couch that asks for emotional support while offering physical comfort. The responses are triggered by touch-sensitive cables, which are sewn into the fluffy upholstery of the couch. While Barrass claims that ZiZi's personality is based on his pet Dalmatian Ziggy, we can safely assume that the artist does not sit on his dog.
Op Shop (1999-2002) :: Stephen Barrass: Cluttered from floor to ceiling with bric-a-brac, Op Shop is a virtual environment that is transformed by sound. Unlike most interactive interfaces where communication occurs by means of a stylus, mouse or other device, Op Shop encourages collaborative interaction. By singing into the microphone a visitor can disturb the virtual objects; a high-pitched wail can shatter glass. By singing and sustaining a low note, they can smash a table. An ever-changing choral score emerges as multiple participants explore the effects of their vocal range. This cacophony of hums, croons, shrieks, claps and whistles, punctuated by percussive shattering and laughter, propels the visitor towards an unexpected finale.
Dislocation (2005) :: Alex Davies: What happens when we can’t trust what we see with our own eyes? As visitors peer into the portals of Dislocation, they become auto-voyeurs seeing and hearing unnerving scenarios play out behind them only to discover the room is empty when they turn around. Ingeniously programmed by Davies to overlay pre-recorded sequences with realtime footage, the simultaneous presence and absence of these phantoms defies rational thought and experience, and creates a haunting atmosphere. The work creates an environment of deception and uncertainty whose subtlety and cunning displaces our reliance upon, and trust of, our own distinct emotional responses to the presence of others.
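The overlay technique described, keying pre-recorded "phantom" footage over the realtime camera feed, reduces to a per-pixel masked composite. A toy sketch of that operation, using plain lists in place of real video frames:

```python
def composite(live_frame, phantom_frame, phantom_mask):
    """Where the mask is set, show the pre-recorded phantom pixel;
    everywhere else, pass the realtime footage through untouched.
    That is what lets recorded figures appear inside a live scene."""
    return [p if m else l
            for l, p, m in zip(live_frame, phantom_frame, phantom_mask)]

# Toy 4-pixel 'frames': the phantom occupies the middle of the image.
live    = [10, 10, 10, 10]
phantom = [99, 99, 99, 99]
mask    = [False, True, True, False]
```

Run per frame, the viewer sees the live room with recorded figures seamlessly embedded, which is why turning around reveals an empty room.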
May 26, 2006
Webvisions 2006 Keynote: The Naked Interface
Liberating Brain, Body and Digital Interactions
Keynote: The Naked Interface - Liberating Brain, Body and Digital Interactions by Luke Williams: Friday, July 21, 1:00 pm – 2:00 pm.; Webvisions 2006, Explore the Future of the WebJuly 20 to 21, 2006 at the Oregon Convention Center in Portland, OR.
Throughout the electronic age, people have become accustomed to interacting with digital media indirectly, mediated through screens and peripheral devices. But now, as digital technology becomes invisibly embedded in everyday things, the "feeling" of everyday things is also increasingly becoming embedded in digital technology.
In many senses, physical objects are becoming more important. In an immediate way, they can help us define new systems of relationships with digital information. This presentation will examine how perceptions and gestures formed through our experiences with physical products can effectively bring liberty to the relationship between brain, body and digital media interface.
What the audience will learn: :: How patterns and archetypes from product design now frame new ways for people to orientate themselves around information. :: The principle of stimulating one sense through another to create multi-sensory interactions. :: New developments at the collision point between "real world" objects and "digital interfaces" – the touch screen.
May 25, 2006
Mawhrin-Skel--by Deanne Achong, Kate Armstrong, Joelle Ciona, David Floren, and Matt Smith, with help from Dina González Mascaró--brings together an eclectic group of local Vancouver artists, who work variously in new media, electronics, sculpture, installation and performance, to create autonomous robots that communicate with one another wirelessly via the internet.
The project--Sheryl [Crowbot] (DA), The Problem of Other Minds (KA), TangleBot (JC), Phono, Mono, and ChartBot (DF), RadBot (MS)--is based on a fictional character--"Mawhrin-Skel"--an intelligent drone that, having failed to meet the conditions of its original purpose, is decommissioned and left to wander aimlessly through a near utopian environment where it becomes a social nuisance and prankster. This character - invented by Iain M. Banks in his 1988 novel "The Player of Games" - provides an interesting social and cultural entry point into the study of robots as both cultural artifacts and autonomous members of society.
Robots typically have industrial applications - wireless mines that can dig their way out of the earth and move to a "better" location, machines that clean up radioactive waste or other hazardous material, surveillance equipment, toasters, coffee makers, etc. It is unusual to build a robot that doesn't have an overt industrial purpose - it may be decorative, dysfunctional, nailed to a tree and bleeping. It exists purely to raise questions about industrial and technological philosophies and ethics in our society. This project examines ideas of function, autonomy, artificial intelligence and purpose-driven technology.
These objects are intended to sit on window sills, desk corners, over doorways, nailed to a post on the back deck, in the gravel pit in the basement, etc. The wireless Internet connection allows the devices to talk to each other and mingle their conversations on the web. The "eyes" of one machine can influence the actions of another. Keywords can generate furious activity or silence. Following the series of workshops, the results of the artists' experiments with robots will be exhibited to the public through a number of events in May 2006 and January 2007.
The Mawhrin-Skel robots communicate with each other using the Scrambler - a message server that was developed in 2003 to connect electronic installation works around the world.
May 23, 2006
Pimp my Heart
HBBB (HeartBeat Bass Booster)
Pimp my Heart is a system that amplifies the heartbeat of a car driver in real time through a special interface with a beefed-up aftermarket car audio system, to achieve an ultimate unity between car & driver. The Pimp my Heart system consists of a 'heartbeat bling bling' sensor that is clipped to the ear lobe, & a 'heartbeat bass booster' that amplifies the heartbeat sound with its high-power amp & subwoofers. The BPM of songs is adjusted to the heart beat rate in real time. A computer LCD monitor in the car displays a visualization of the heartbeat sound in real time. See also heartbeat bracelets & heartbeat water bowl & heart songs & heartbeat virtual worlds. [takehitoetani.com|via we-make-money-not-art.com] [blogged on information aesthetics]
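The real-time BPM adjustment could be as simple as a clamped ratio of heart rate to the song's authored tempo. A minimal sketch, purely illustrative (the project's actual mapping is not published; the function name and clamp range are invented here):

```python
def playback_rate(heart_bpm, song_bpm, min_rate=0.5, max_rate=2.0):
    """Map the driver's heart rate to a playback-speed multiplier.

    A song authored at `song_bpm` is resampled so its effective tempo
    tracks the measured heart rate, clamped so a racing pulse never
    renders the track unlistenable.
    """
    rate = heart_bpm / song_bpm
    return max(min_rate, min(max_rate, rate))

# A resting driver (72 bpm) slows a 120 bpm track; excitement speeds it up.
print(playback_rate(72, 120))   # 0.6
print(playback_rate(144, 120))  # 1.2
print(playback_rate(300, 120))  # clamped to 2.0
```

The clamp is the interesting design choice: a direct, unclamped mapping would let an adrenaline spike double or triple the tempo, which would defeat the "unity between car & driver" effect rather than reinforce it.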
GAM3R 7H30RY 1.1 is live!
McKenzie Wark's Networked Book
The Institute has published its first networked book, GAM3R 7H30RY 1.1 by McKenzie Wark! This is a fascinating look at video games as allegories of the world we live in, and (we think) a compelling approach to publishing in the network environment. As with Mitch Stephens' ongoing experiment at Without Gods, we're interested here in a process-oriented approach to writing, opening the book up to discussion and debate while it's still being written.
Inside the book, you'll find comment streams adjacent to each individual paragraph, inviting readers to respond to the text on a fine-grained level. Doing the comments this way (next to, not below, the parent posts) came out of a desire to break out of the usual top-down hierarchy of blog-based discussion — something we've talked about periodically here. There's also a free-fire forum where people can start their own threads about the games dealt with in the book or about the experience of game play in general. It's also a place to tackle meta-questions about networked books and to evaluate the successes and failings of our experiment. The gateway to the forum is a graphical topic pool in which conversations float along axes of time and quantity, giving a sense of the shape of the discussion.
Both sections of GAM3R 7H30RY 1.1 — the book and the forum — are designed to challenge current design conventions and to generate thoughtful exchange on the meaning of games. McKenzie will actively participate in these discussions and draw upon them in subsequent drafts of his book. The current version is published under a Creative Commons license.
And like the book, the site is a work in progress. We fully intend to make modifications and add new features as we go. Here's to putting theory into practice!
May 22, 2006
Keeping Distributed Groups Connected
Monkey Business is a system that attempts to keep distributed group members more connected and aware of each other's activities; the system aims to facilitate informal and spontaneous communication, while minimizing interruption at inopportune times. The system consists of a network of animatronic agents, one of which will reside in the office of each member of a distributed group. We have chosen the embodiment of a monkey for the form of these agents; hence Monkey Business as the title of this project.
The agent uses a combination of microphones and sensors to recognize the activity in the office that it occupies. If there is a change in the state of the office activity, the agent broadcasts the information out to the network of other agents. The other agents, through subtle gestures, movements, and sounds, indicate the changes of state of the broadcasting office. Thus all members of the group, through their respective agents, are made aware of each other's activities in an ambient manner.
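The broadcast behaviour described above might look something like this in miniature. Everything here is invented for illustration (agent names, states, gestures); the papers below describe the real system:

```python
class MonkeyAgent:
    """One animatronic agent: senses its own office, mirrors the others."""

    # Hypothetical mapping from a peer office's state to a displayed gesture.
    GESTURES = {"quiet": "sleep", "talking": "perk ears", "meeting": "cover eyes"}

    def __init__(self, office, network):
        self.office = office
        self.network = network
        self.state = "quiet"
        self.displayed = {}          # peer office -> gesture currently shown
        network.append(self)

    def sense(self, new_state):
        """Called when microphones/sensors detect a change of activity."""
        if new_state != self.state:
            self.state = new_state
            for peer in self.network:   # broadcast to every other agent
                if peer is not self:
                    peer.receive(self.office, new_state)

    def receive(self, office, state):
        """Ambiently indicate a peer office's new state with a gesture."""
        self.displayed[office] = self.GESTURES.get(state, "idle")

network = []
alice = MonkeyAgent("alice", network)
bob = MonkeyAgent("bob", network)
alice.sense("meeting")
print(bob.displayed)  # {'alice': 'cover eyes'}
```

Note that only *changes* of state are broadcast, which is what keeps the channel ambient rather than a constant stream of interruptions.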
Also see Monkey Business: Creating social awareness among distributed group members, using a network of animatronic agents by Rachel Kern and Toward Lighthearted Mobile Non-verbal Expression by Rachel Kern, Chris Schmandt, and Paulina Modlitba. [via Jim Downing on Smart Mobs]
May 16, 2006
Gizmodo Interview/Article by Jonah Brucker-Cohen
"With an ever-increasing amount of technology intended to “improve”, “augment”, and / or “add convenience” to our busy lives, there seems to be less of an emphasis on creating devices to reflect or comment on our natural or built environments. Taking this challenge as a starting point with her work, Chicago-based artist, Sabrina Raaf, examines the seemingly “invisible” elements of modernized and technologically equipped spaces by re-interpreting this covert data through mechanized objects that create feedback in the form of sound or other visual outputs. From exploring live data sets in the immediate gallery space with “Translator II: Grower”, a robot that measures carbon dioxide levels and draws corresponding blades of grass on the wall, to exploring the tension between humans and adaptive or automated systems with “Dry Translator”, Raaf’s work exposes the unspoken conflicts between society’s push for technological autonomy and the struggle to retain human emotion and sensibility. Her most current work, “Icelandic Rift” comments on the almost “alien” nature of future forms of agriculture that could exist in zero-gravity environments.
Gizmodo recently caught up with Raaf to discuss her unique and calculated artistic approach to creating work that not only challenges common perceptions of technological utopia, but also examines just how deeply we’ve become entrenched in high-tech fetishism." [via coin operated]
WiFi Rabbit Opera -Flash Mob
Regine pointed me to this cool event: an opera for Nabaztags at the Web Flash Festival on May 27th. It’s a kind of flash mob with communicating artifacts: 100 Nabaztags (WiFi rabbits) brought by their owners will play an opera created by Antoine Schmitt and Jean-Jacques Birgé at the Centre Pompidou in Paris. [blogged by nicolas on pasta and vinegar] [Related]
Translation of the text [French to English]: Technical constraints are sometimes decisive when composing. The Nabaztag rabbits that will be present on the stage of the Web Flash Festival on May 27 are unable to play together synchronously; they drift apart over a span of up to 10 seconds. That is enormous: if we send them a single note over WiFi, the result will be 100 repetitions of that note, played on average every tenth of a second, like a mandolin tremolo!
We obtain interesting effects of Brownian motion, shifting magmas of small crystalline notes, the timbre chosen for the first movement being Nabaztag's ordinary one, a kind of glockenspiel. Sending a melody in MIDI format will thus produce chords that change slowly according to the pitch and duration of the intervals. But our rabbits have their own behavior and can choose between one MIDI file and another, an option we will use briskly in the second movement to build a succession of more or less consonant chords out of monodic melodies, this time made of long notes. Long random chords thus follow the first movement, whose agitated clusters alternate with silences in which the only sound comes from the choreography of the pack's ears.
For the third movement, we plan to reproduce extracts from works of classical music, completely transformed by the system's phase-shifting. The passages, selected by us but which the 100 rabbits will use at their own discretion, begin with short introductory phrases and end in a coda that finally brings all our little beasts into agreement. A luminous choreography will complete the picture, also retransmitted on a large screen at the back of the stage.
The result will depend on simulations that can be tested with the small program Antoine is building…
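The desynchronization the composers describe can itself be sketched in a few lines: each rabbit plays the same note after its own lag of up to 10 seconds (the 10-second figure comes from the text; everything else, including the uniform-lag assumption, is illustrative):

```python
import random

def simulate_note(n_rabbits=100, max_lag=10.0, seed=1):
    """Each rabbit plays the same note after its own random WiFi lag.

    Returns the sorted onset times (in seconds) of the 100 repetitions.
    """
    rng = random.Random(seed)
    return sorted(rng.uniform(0.0, max_lag) for _ in range(n_rabbits))

onsets = simulate_note()
gaps = [b - a for a, b in zip(onsets, onsets[1:])]
mean_gap = sum(gaps) / len(gaps)
# With 100 rabbits spread over 10 seconds, onsets arrive roughly every
# tenth of a second -- the "mandolin tremolo" the composers describe.
print(round(mean_gap, 2))
```

This is presumably close in spirit to the simulation program mentioned above, though the actual lag distribution of the rabbits is not documented.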
Below is the first official statement sent to the press:
Nabaz'mob, an opera for 100 communicating rabbits, by Antoine Schmitt and Jean-Jacques Birgé
A coproduction of the Web Flash Festival and Violet (creators of the Nabaztag rabbit). An original initiative of Guylaine Monnier.
Answering the call of the company Violet, 100 Nabaztag rabbits brought by their respective owners will gather, in the spirit of flashmobs, on the stage of the Centre Pompidou to perform, all together, an opera composed especially for the occasion by Antoine Schmitt and Jean-Jacques Birgé.
Invoking John Cage, Steve Reich and Conlon Nancarrow, this open musical and choreographic score in three movements, transmitted over WiFi, exploits the tension between the communion of the group and individual behavior to create a work that is at once strong and engaging.
On May 27, 2006 at 8 pm
at the Centre Georges Pompidou (Paris)
Opening evening of the Flash Festival
(free entry, subject to available seating)
May 10, 2006
a network of books
Connections + Actor-Network Theory
Connections is the outcome of a reflection which began in the early Nineties, at the time of the war in the Gulf.... At that time, the operations Desert Storm and Desert Fox, preceding the last operation which could be named "Desert, full stop", established the era of a media-oriented war, a war of images, on the very site of the Revelation, that of the three sacred books, a historic place dedicated to communication. They clearly showed the Arab countries' lack of means of communication, and even lack of communicational power, as well as a resurgent fear of technology.
In our calendar, that of the Hegira, we are today in 1420, eternally nomads. Our roots are clearly set in the future, as the Arab poet Adonis wrote. For me, it is an attempt to enter this desert, this collective memory, to remove sand from objects which may lose their identity through the changing of material but will still keep their memory.
A recent comment from Adam Greenfield, author of the just-published "Everyware: The Dawning Age of Ubiquitous Computing," seems apropos:
I've become all but unable to think of the objects around me except in terms of Actor-Network theory, as sort of depositions or instantiations of a great deal of matter, energy and information moving through the world. And of course, a book is nothing but a snapshot in that regard; you have to do a lot of extra work if you want to prise out and examine the flows it is a part of, or even those it has set up. [posted by ben vershbow on Future of the Book]
April 27, 2006
Ceramic Sculpture meets Bluetooth Hacking
(A)Bugs is an interactive ceramic sculpture --at a time when working with ceramics is regarded as "totally lame, not hype at all you know!"-- but also an ironic game on dogmas.
The piece uses the Bluetooth capabilities of visitors' mobile phones to address them individually as they wander around the exhibition space, reading their electronic identity and drawing their attention to what has now become a common practice: invisible surveillance. Surveillance has been a major theme of the XLRMX.ORG collective (ChaosCam is an amusing proof of that), which will be present there. The sculpture actually "talks" to the viewer with speech synthesis, engaging him/her on subjects depending on what it can gather about the viewer and where s/he is located (in reach, away, staying close, etc.)
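A real installation would discover nearby phones with a Bluetooth inquiry (e.g. PyBluez's `discover_devices`); the sketch below stubs the scan out and only shows the plausible mapping from signal strength and device name to a spoken line. Every zone name, threshold, and phrase here is invented, not taken from the artwork:

```python
def zone_for(rssi):
    """Crudely bin a received signal strength (dBm) into a proximity zone."""
    if rssi > -50:
        return "stays close"
    if rssi > -70:
        return "in reach"
    return "away"

def prompt_for(device_name, rssi):
    """Choose what the sculpture says, based on what it can glean.

    The phone's Bluetooth-visible name is the 'electronic identity'
    the piece reads back to its owner.
    """
    zone = zone_for(rssi)
    lines = {
        "stays close": f"Hello {device_name}. You linger. Am I watching you, or are you watching me?",
        "in reach": f"I can see you, {device_name}.",
        "away": "Someone new is in the room...",
    }
    return lines[zone]

# Stubbed scan result; a real install would discover this over Bluetooth.
print(prompt_for("Nokia_6230", -45))
```

The unnerving effect of the piece comes precisely from this step: the phone name a visitor never thinks about becomes the hook for a personal address.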
Refusing to take sides, the sculptural work is central to the critique of dogmas.
"Obscurantism is not only present in religion and politics with their fundamentalist or totalitarian surveillance dead-ends, it's also something that we have inside ourselves. Playing with these themes is important if one wishes to exercise one's criticism and not fall into the easy 'them vs us' kind of attitude. It's also about admitting we're human and that sometimes, we slip, we make mistakes. That's what a bug is there for. And we can do something about it," said Philippe Langlois, commenting on his work.
By Philippe Langlois and the XLRMX.ORG collective.
[blogged by Regine on we-make-money-not-art]
April 19, 2006
By breathing on the Infobreath "cybernetic" lily, people trigger a flurry of text, making visible the wireless internet traffic passing through the air. Rigged with a breath sensor and connected to the Carnivore sniffer, the flower is cued in to the wireless network flowing in the space surrounding it. The plant absorbs and analyzes this information, and releases those packets in a form comprehensible to humans.
A piezo breath sensor registers the breath. A microcontroller attached to the plant converts these signals for the computer, and the Carnivore client captures network traffic through the computer’s wireless card.
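The sensing chain can be sketched as a small simulation, with the piezo readings and the Carnivore packet feed stubbed out. The class name, threshold, and burst size are all invented here; only the pipeline (breath above a threshold releases buffered sniffed text) comes from the description above:

```python
from collections import deque

class Infobreath:
    """A breath above the threshold 'exhales' buffered sniffed text.

    Piezo values and packet strings are stubbed; in the installation
    they come from the microcontroller and the Carnivore client.
    """
    def __init__(self, threshold=0.3, burst=3):
        self.threshold = threshold   # minimum piezo reading that counts as a breath
        self.burst = burst           # packet fragments released per breath
        self.buffer = deque()

    def on_packet(self, text):
        """Carnivore client hands over a human-readable packet fragment."""
        self.buffer.append(text)

    def on_breath(self, piezo_value):
        """Return the packet fragments released by this breath, if any."""
        if piezo_value < self.threshold:
            return []
        n = min(self.burst, len(self.buffer))
        return [self.buffer.popleft() for _ in range(n)]

flower = Infobreath()
for t in ["GET /index.html", "hello from alice", "ping 192.168.0.7"]:
    flower.on_packet(t)
print(flower.on_breath(0.1))  # []  -- too gentle to trigger
print(flower.on_breath(0.8))  # releases the buffered fragments
```

The FIFO buffer matters: the flower replays traffic in the order it drifted past, so each exhalation is a little time-slice of the room's invisible conversation.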
Infobreath imagines a world in which the carbon dioxide we exhale carries comprehensible information, and envisions the transfer of carbon dioxide to oxygen within a plant as a transfer of information: an information ecosystem. It imagines the plant, buffeted by streams of wireless data, sifting through those pings and packets for the few elements sent from one human to another, and reflecting those living packets of internet data back to us, in an elemental attempt at communication.
Folk Tradition and Technology
A shofar is a ram's horn used as a musical instrument for religious purposes in Judaism. Mentioned in the Hebrew Bible and throughout the Talmud, it is played during the high holy days of Rosh Hashanah, the beginning of the repentance period, and Yom Kippur, the day of atonement. While similar to the sound of a trumpet, the sound produced by the shofar is not strictly musical; rather, it provides a call to reflection and an alarm. In the digital era, someone has thought about how to modernize the instrument through technology. The idea belongs to Bob Gluck, an American composer, researcher and performer who, using processors, sensors and custom-designed Max/MSP software, has transformed a simple horn into an eShofar: an electronically expanded, digitally processed ram's horn.
There are now two versions. The first, connected to a sensor glove, focuses on finger movements as inputs that create sound, while the second uses complex algorithms to create a chaotic improvisational system based on the sounds performed live. Gluck's goal is to encourage people to cross boundaries between conventional and new musical aesthetics, traditional cultures and modern life, religious and secular sensibilities. While the live performance keeps its ceremonial attitude, listening to the random digital sequence of tekiah (the deep sound) and teruah (the trill) on streaming video is no different from listening to many other electronic experiments. The value remains in the crossover of folk tradition and technology, which modernizes the past and sparks the creative imagination. Valentina Culatti, NEURAL.
March 28, 2006
Mawhrin Skel is your Robot-f(r)iend
Intelligent Drone Lurks About
Mawhrin-Skel is a fictional character - an intelligent drone that, having failed to meet the conditions of its original purpose, is decommissioned and left to wander aimlessly through a near utopian environment where it becomes a social nuisance and prankster. This character - invented by Iain M. Banks in his 1988 novel "The Player of Games" - provides an interesting social and cultural entry point into the study of robots as both cultural artifacts and autonomous members of society.
In related workshops at Vancouver BC's Western Front, the artists will customize circuits by altering the language of the controller, adding sculptural components (static and moveable), adding sensor components, and designing behaviours. These objects are intended to sit on window sills, desk corners, over doorways, nailed to a post on the back deck, in the gravel pit in the basement, etc. The wireless Internet connection allows the devices to talk to each other and mingle their conversations on the web. The "eyes" of one machine can influence the actions of another. Keywords can generate furious activity or silence.
Robots typically have industrial applications - wireless mines that can dig their way out of the earth and move to a "better" location, machines that clean up radioactive waste or other hazardous material, surveillance equipment, toasters, coffee makers, etc. It is unusual to build a robot that doesn't have an overt industrial purpose - it may be decorative, dysfunctional, nailed to a tree and bleeping. It exists purely to raise questions about industrial and technological philosophies and ethics in our society. This project examines ideas of function, autonomy, artificial intelligence and purpose-driven technology.
March 16, 2006
Tangible Weather Channel
Tangible Weather Channel is a sculptural apparatus that lets the participant input the remote location of a loved one and interprets its real-time weather information as a way of creating an emotional connection. Rather than employing traditional graphical representation, Tangible Weather Channel renders weather information into a multi-sensory experience using natural elements such as water, air and sound. By materializing weather dynamics on intimate sites to mediate what occurs in another place, Tangible Weather Channel encourages the participant to establish links with his/her experiential memories of a specific place and to create a sense of closeness via touch and contemplation.
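The core of such a piece is a mapping from a remote weather reading onto a few physical output channels. A minimal sketch, assuming invented channel names and ranges (the artwork's real mapping is not documented here):

```python
def weather_to_actuators(temp_c, wind_kph, rain_mm):
    """Map a remote site's weather onto the sculpture's physical channels.

    Each channel is normalized to 0..1 for whatever drives it
    (pump duty cycle, fan PWM, synthesis parameter, ...).
    """
    clamp = lambda x: max(0.0, min(1.0, x))
    return {
        "water_flow": clamp(rain_mm / 20.0),        # drizzle -> trickle, downpour -> gush
        "fan_speed": clamp(wind_kph / 60.0),        # local breeze mirrors remote wind
        "sound_warmth": clamp((temp_c + 10) / 40),  # timbre warms with temperature
    }

# A mild, rainy day at the loved one's location:
print(weather_to_actuators(temp_c=15, wind_kph=12, rain_mm=4))
```

The interesting design question is the shape of each curve: a linear map is legible, but a perceptual piece might prefer nonlinear mappings so that small remote changes stay feelable at intimate scale.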
The capability of creating a continuum between the physical and virtual through media technology has implied a new relationship among the body, perception, space and time. From an architectural perspective, the physical envelope tends to evolve into a portal connecting our bodies with other networked spaces and liberating us from the captivity of our physically-bound surroundings. From a phenomenological perspective, our perception of "now and here" might just as well be "now and there", in both temporal and spatial senses. Tangible Weather Channel explores this architectural and phenomenological potential and its implications. It also investigates the experiential and performative aspects of information representation, and the interrelationships among material, meaning, memory and perception. [via information aesthetics]
March 15, 2006
Report from the Blogject Workshop at LIFT06
Being "in" a Pervasive Network
"As the Internet pervades more physical space and more social space it is likely that objects in the world will become able to connect to the network and participate in the web by disseminating and receiving data communications. As “things” participate within the Internet and once the Internet soaks through physical, geographic space a differentiated kind of Internet may arise. The Internet of Things sets up a different set of relations to social practice (we will be “in” a pervasive network) and a different set of relations to space (the Internet will be co-occupied by both social beings and things.) This shift generates new possibilities for integrating networked things into the Internet. This workshop addresses this shift by considering its characteristics in relation to an existing, prevalent set of practices and technologies currently in existence variously referred to as “the social web” and “Web 2.0.” We then proceeded into four groups to conduct design scenarios in order to further explicate our understanding of a world in which things are connected, networked participants within a pervasive, wireless, mobile Internet. We conclude that there is a significant opportunity for designing compelling usage scenarios for such a near-future Internet of Things world and recommend a follow on, intensive, multi-day workshop/retreat to continue contributing to this important topic." From Report from the Blogject Workshop at LIFT06 [blogged by nicolas on pasta and vinegar]
March 06, 2006
NEC's "KotoHana" LED flower
knows how you feel
We were just saying to ourselves the other day, "man, if we just had an LED flower that could sense our emotions... then we'd have it all." Well dreams do come true, because NEC is going to be presenting their KotoHana (the talking flower) at CeBit this year. It's pretty hard to tell what's going on here, but there seems to be a "Sensibility Technology" that recognizes the user's feelings, and then tells the flower over a wireless connection. The system works over the Internet, so even from far away the flower's LEDs can light up to reflect your true feelings to that special someone. [posted by Paul Miller on Engadget]
February 27, 2006
ROBOTIC ARTS, INFLATABLE AESTHETICISM
Uncovering the Primacy of Movement and Sound
EXPERIMENTAL ART FOUNDATION and the ADELAIDE BANK FESTIVAL OF ARTS 2006 present ROBOTIC ARTS, INFLATABLE AESTHETICISM.
Robotic Arts, Inflatable Aestheticism is a project comprising exhibition, workshop, and presentations by Chico MacMurtrie, one of the world's leading artists using robotic technologies. Chico MacMurtrie - born in New Mexico and now residing in New York - is the Artistic Director of Amorphic Robot Works (ARW). Formed in 1992, ARW is a group of artists, engineers and technicians working together to create robotic performances and installations. Chico MacMurtrie describes his vision: "The work is an ongoing endeavor to uncover the primacy of movement and sound. Each machine is inspired or influenced both by modern society and by what I physically experience and sense. The whole of this input informs my ideas and work."
3 MARCH-8 APRIL INFLATABLE BODIES A NEW GENERATION OF ROBOTIC SCULPTURE: Since its establishment, Amorphic Robot Works has investigated the nature of movement via the creation of multi-faceted machine-sculptures that play and interact in their uniquely designed environments. This artistic program has resulted in the creation of more than 250 mechanical sculptures of varying size that assume anthropomorphic and abstract forms. One of the hallmark aspects of ARW's work is its direct contact with the audience. Rarely does the work appear on stage, behind ropes or behind glass. As the audience for the work has grown in size, so has the work itself.
To make performances visible, and yet still physically accessible to the audience, the work has been erected and performed upon integrated sculptures of increasing height. A new generation of permanent work from ARW will completely eliminate the metal and thus the physically and logistically cumbersome weight. Humanoid forms will arise from high-tensile inflatable fabric skeletons, formless until air inflates the bones. The possibilities for range and kind of movement are as broad as those for muscle and bone, but with little of the mass. This new generation of interactive performance sculpture from ARW allows not only for a broader exhibition base, but also for an interesting new robotic form that conforms neither to the standard conception of 'robotic' forms nor to the existing understanding of kinetic sculpture.
3-19 March: 10-5 Daily; 21 March-8 April: 11-5 Tues-Fri; 2-5 Sat; Admission Free
24 FEBRUARY-7 MARCH WORKSHOP THE CREATION OF ROBOTIC ARTS
The Robotic Arts Workshop will serve as a practical and theoretical platform for the creation of new generations of robotic sculpture and installation, developed by Chico MacMurtrie and Amorphic Robot Works. The event will urge the creation of robotic arts by Australian artists, as it provides hands-on exploration of robotic technologies. Drawn from national registration, artists and robot makers will assist in the building of the robotic structures for the exhibition.
24 February-7 March. Registrations closed.
6 & 7 MARCH FREE PRESENTATIONS
4-6pm Monday 6 March: Public lecture, Mercury Cinema, Lion Arts Centre, 13 Morphett Street, Adelaide
10.30am-12.30pm Tuesday 7 March: Artists' Week panel discussion "Emerging Fields" Elder Hall, University of Adelaide
A comprehensive catalogue with information about the project will be available.
Contact Experimental Art Foundation Director and Curator of ROBOTIC ARTS, INFLATABLE AESTHETICISM: Melentie Pandilovski: +61 (0)8 8311 7505.
EXPERIMENTAL ART FOUNDATION curates its exhibition program to represent new work that expands current debates and ideas in contemporary visual art. The EAF incorporates a gallery space, bookshop and artists studios.
Lion Arts Centre North Terrace at Morphett Street Adelaide * PO Box 8091 Station Arcade South Australia 5000 * Tel: +618 8211 7505 * Fax +618 8211 7323 * eaf[at]eaf.asn.au * Bookshop: eafbooks[at]eaf.asn.au * http://www.eaf.asn.au * Director: Melentie Pandilovski
The Experimental Art Foundation is assisted by the Commonwealth Government through the Australia Council, its arts funding and advisory body, and by the South Australian Government through Arts SA. The EAF is also supported through the Visual Arts and Craft Strategy, an initiative of the Australian, State and Territory Governments.
February 22, 2006
Space, Place and Things
New Rules of Tenancy _within_ the Internet of Things
I've been thinking about the relationship between space and networked things as I write this report with Nicolas Nova for our workshop on objects that blog and I've realized (without too much surprise) that that workshop and my thinking about "place" and networked publics are pulling together, particularly in the context of the Internet of Things.
When the place group presented its thinking on the role of place in the context of networked publics, I felt it was important to consider how a world of Things will alter the patterns of usage, movement and mobility within space. (I'll capitalize for now to distinguish networked Internet-of-Things things from non-networked things, and, so as to be succinct, I'll do an end-run around Heidegger, Kant and Latour, but not for long, Anne!) And I'll call this differentiated kind of movement and these rules of occupancy within this different kind of place "motility," so as to emphasize what I think is a safe speculation: a world in which Things that co-occupy physical space are known (by the other occupants of that space) or assumed to have the ability to disseminate, record, and perhaps even put in context what happens in that space, and to circulate all this within the network, will change the patterns of use, the kinds of social practice that obtain, and the imaginary about that space. This kind of space and its rules of tenancy are different from space in which such "blogging" characteristics are not assumed of things.
The easiest analogy is to think about how patterns of usage and the "rules of tenancy" for occupying space are altered when that space contains surveillance technologies. (By rules, I mean the unwritten ones as well as those formalized in law and social policy.) The work of the R&D collective The Institute for Applied Autonomy is one of the better examples of really bringing to the fore the way surveillance technologies change the way we think about, move through, and generally occupy space. Their project iSee takes DIY-plotted locations of urban surveillance and, using Google Maps-like techniques (way before Google Maps-like techniques were formalized), creates new pedestrian paths so as to minimize surveillance exposure for those wishing to stay uncharted.
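The routing idea behind iSee can be reduced to a toy: shortest-path search over a map in which surveilled cells are simply off-limits. This sketch is only the kernel of the idea, not iSee's actual algorithm, and the grid is invented:

```python
from collections import deque

def surveillance_free_path(grid, start, goal):
    """Breadth-first search for a walking route that avoids watched cells.

    `grid` marks surveilled cells with 1 and open ones with 0; iSee's
    real routing over city maps is far richer than this.
    """
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None  # every route passes a camera

city = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(surveillance_free_path(city, (0, 0), (0, 2)))  # routes down and around the watched column
```

What changes the "rules of tenancy" is exactly this: once the watched cells are mapped and shared, the network reshapes pedestrian movement through physical space.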
This to me is a great, perhaps even canonical, example of the ways in which place and mobility, together with the capacity of networks, are impacted. There is something more here than just surveillance of hapless occupants of physical space. In this example, there seems to be an important relationship between the Internet and mapping practices as well. The example is a very early one, in Internet years and Internet practices (pre-Google Maps, as I mentioned, and there really is no networked Thing, strictly speaking), but it anticipates, in my mind, a confluence of networks, Things and differentiated social practice as a result of blending these together.
I am speculating here that the introduction of the "Thing" that is networked in such a way as to circulate within both physical space and networked space will change the ways in which we occupy space, and that this deserves closer attention from the community of folks working on explicating as well as making this new kind of networked place.
So, what does it all mean? It means that the Internet of Things is less about RFID tags everywhere and more about a different kind of architecture, where boundaries and paths are shaped also by networked Things.
And what are the stakes? Assuming we care about changes in the rules of tenancy of place and are concerned about this kind of architecture, we may want to explicate these new rules so we can think through ways to create more habitable space.
Why do I blog this? Because I am trying to create what I think is an important connect-the-dots game between Internet of Things euphoria, Internet of Things dystopia and a pragmatic set of "design patterns" so that this stuff becomes legible to the "doers" — those who create the worlds in which we will be tenants (most likely the designers, engineers, policy and standards body folks and so on who are the architects and machinists of these worlds.)
Technorati Tags: blogjects, locative media, mobile, motility, networked publics
[blogged by Julian Bleecker on netPublics]
February 21, 2006
Why Pigeons that Blog Matter, or:
The Internet of Things is not an Internet of Arphids
"Abstract: The Internet of Things has evolved into a nascent conceptual framework for understanding how physical objects, once networked and imbued with informatic capabilities, will occupy space and occupy themselves in a world in which things were once quite passive. This paper describes the Internet of Things as more than a world of RFID tags and networked sensors.
Once "Things" are connected to the Internet, they can only but become enrolled as active, worldly participants by knitting together, facilitating and contributing to networks of social exchange and discourse, and rearranging the rules of occupancy and patterns of mobility within the physical world. “Things” in the pervasive Internet, will become first-class citizens with which we will interact and communicate. Things will have to be taken into account as they assume the role of socially relevant actors and strong-willed agents that create social capital and reconfigure the ways in which we live within and move about physical space.
To distinguish the instrumental character of “things” connected to the Internet from “things” participating within the Internet of social networks, I use the neologism “Blogject” — ‘objects that blog.’"
Bruce Sterling. Shaping things. MIT Press, Cambridge, Mass., 2005.
Donna J. Haraway. The companion species manifesto: dogs, people, and significant otherness. Prickly Paradigm, University Presses Marketing, Chicago, Ill., 2003.
Bruno Latour. We have never been modern. Harvard University Press, Cambridge, Mass., 1993.
Technorati Tags: blogjects, spimes, rfid
[blogged by Julian Bleecker on Research Techkwondo]
February 27, 2006: Paper now available.
February 03, 2006
The Robotic Percussionist
Let's imagine a typical situation: a jam session where talented musicians pour all their creativity into improvising, listening to each other and reacting to each other's input. However, let's think about the trio (AKA triumvirate) consisting of Ellington and Mingus with a variation at the drums: not Max Roach but Haile, the robotic percussionist. Defining it as a drum machine would be an understatement, because the prototype developed by Gil Weinberg and Scott Driscoll at the Georgia Institute of Technology is an anthropomorphic robot that uses computational power and numerical algorithms to listen to live players, analyze their music in real time, and play with them in an improvisational manner.
According to Weinberg, Haile can generate music that has never been played before, creating a new kind of human-machine interaction that could lead to an innovative sound. This novelty stems from the acoustic, analogue dimension that digital music acquires when it is reproduced not through speakers but played live, in this case on a Native American pow-wow drum. The analogue-digital combination also emerges from the wooden texture of the prototype, designed in collaboration with the College of Architecture, to which the Music Department belongs. The project is part of a research trend that includes experiments like GuitarBot and P.E.A.R.T., aimed at satisfying the intimate desire of many electronic musicians for an anthropomorphic partner. - Valentina Culatti, Neural.
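One building block of this kind of real-time listening is onset detection: finding the moments a drum is actually struck. The sketch below is a generic energy-jump detector, not Weinberg and Driscoll's actual algorithm; the frame size, threshold and noise floor are invented for illustration.

```python
def detect_onsets(samples, frame=4, threshold=1.5, floor=0.01):
    """Flag frame starts whose energy jumps past `threshold` times the previous
    frame's (a small `floor` keeps silence from suppressing the comparison)."""
    energies = [sum(s * s for s in samples[i:i + frame])
                for i in range(0, len(samples) - frame + 1, frame)]
    onsets = []
    for idx in range(1, len(energies)):
        if energies[idx] > threshold * max(energies[idx - 1], floor):
            onsets.append(idx * frame)  # sample index where the hit begins
    return onsets
```

A robotic player would feed these onset times into its rhythm analysis, then time its own strokes in response.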
February 02, 2006
Jean Pierre Gauthier at Transmediale 06
Household Objects Revolt!
Jean-Pierre Gauthier's mobile sound installations express the delights of disorder. He takes daily objects out of their context, rearranges them and sets them going in autonomous movement sequences. Most of the works of Gauthier consist of objects which can be identified, but which do not trigger any emotional or cultural associations. A characteristic feature of his installations is the use of household articles and cleaning products, which may be read as a kind of commentary on prevailing social circumstances. The things make themselves independent, declare revolt and bear a symbolic reference, as a closed microsystem, to aspects of commercial exploitation and economic cycles. In his installation Remue-ménage the artist also succeeds in involving the residual waste of the exhibition's architecture. The analog animation movements of the gadgets lend the installation a faintly antiquated character – an altogether unusual, but deliberately chosen form in contrast to the technological development. At Transmediale 06.
January 16, 2006
Ambient Devices, Networked Objects
Have you heard of Ambient Devices? They make networked objects that glow in response to stock-market data or weather information. Tobie Kerridge, though, has created Life Event, a networked object in a league of its own.
Life Event tracks the status of a prisoner on death row in Texas, using the Justice Department website. When the execution of prisoner 990 is reported on the site, the object will turn on. Brine shrimp - or Sea Monkeys - are injected into the salt water, a pump and light source are triggered and the organisms will be fed periodically by the second syringe. A living environment is initiated as a result of the execution. Until that time Life Event is mechanically and biologically dormant. [blogged by Chris on Pixelsumo]
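A minimal sketch of the mechanism, assuming the object simply polls the status page and fires its actuators once a trigger phrase appears. The keyword, interval and injected fetch function here are hypothetical stand-ins, not the piece's actual code.

```python
import time

def watch_status(fetch, on_trigger, keyword="executed", poll_seconds=3600, max_polls=24):
    """Poll the page text returned by `fetch()`; fire `on_trigger()` when the keyword appears.

    Injecting `fetch` (rather than hard-coding a URL) keeps the sketch testable offline.
    """
    for _ in range(max_polls):
        if keyword in fetch().lower():
            on_trigger()  # e.g. start the pump and light, inject the brine shrimp
            return True
        time.sleep(poll_seconds)
    return False
```

Until the keyword turns up, the loop does nothing at all, which mirrors the piece's dormancy.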
January 12, 2006
Forecasting Umbrella and Magic Mirror
During CES last week Ambient Devices announced a few new items in their line-up. Most exciting is the Forecasting Umbrella, which has a handle that pulses blue light when there's a good chance of rain. The device is connected to Ambient's proprietary network, which allows it to know where it is and then download the local weather report. Next time you head out the door, just glance at the umbrella and it will let you know if you should bring it along. Brilliant simplicity.
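One way such a handle might map a forecast to light, purely as a guess at the behaviour: the cutoff and pulse rates below are invented, not Ambient's specification.

```python
def pulse_interval(rain_probability, fastest=0.5, slowest=4.0):
    """Map chance of rain (0..1) to seconds between pulses of the handle light.

    Higher probability means faster pulsing; below a cutoff the light stays off (None).
    """
    if rain_probability < 0.2:
        return None  # clear enough: no glow at all
    # linear interpolation between the slowest and fastest pulse rates
    t = (rain_probability - 0.2) / 0.8
    return round(slowest - t * (slowest - fastest), 2)
```

The point of a mapping like this is glanceability: the owner reads urgency from rhythm, never from a number.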
Ambient also announced a new Magic Mirror that looks like a plain mirror until you approach it—time, weather, traffic, stock or other information appear behind the glass and around the perimeter. Lastly, their series of information displays (weather, traffic, stocks, etc.) will be updated to use eInk for greater legibility. [blogged by Josh Rubin on Cool Hunting]
"Present day computer technology, including PDAs and cellphones, utilizes only a fraction of our sensory and cognitive capabilities.
Ambient devices elegantly embed digital information into the objects and environments that surround us. These displays are in the form of sound, air pressure, motion, light, smell, and other media that complement the full range of our human sensory modalities. They exist in the periphery of our senses, where they provide continuous information without being distracting." more >>
January 11, 2006
Self-Conscious, Reluctant Furniture
The LonelyHome bench--by Tobi Schneidler--is a hybrid creation, part domestic furniture and part robotic pet: a socially intelligent design object. It can be used as an ordinary piece of living room furniture, but it will also come alive unexpectedly. The LonelyHome is a relative of the RemoteHome, the first apartment that exists in two cities at the same time. Originally conceived as a mediating environment that connects close friends over distance through tangible design elements, the RemoteHome has also produced some cousins that are meant to be used locally.
Unlike the RemoteHome, the LonelyHome is a domestic, robotic furnishing. It can be (ab-)used as a mundane piece of furniture, but it will also come alive at certain times to confront its owner and challenge his presence. The LonelyHome will define a new class of responsive objects and spaces that claim both the utility of a kitchen blender and the social competence of a robotic pet. But unlike the humanoid fantasies of Japanese robot scientists, the LonelyHome is a more hybrid proposal, a socially intelligent design object. [via Interactive Architecture]
January 04, 2006
Objects That Blog
The Future of Content?
"The first few days of February I'll be at Lift, a conference on the near future of technology, people and communication. Nicolas Nova and others have organized this conference, together with a workshop on Blogjects — a not particularly clever neologism I came up with for objects that blog. This topic ties into the idea of proximity-based interaction and usage scenarios for mobile contexts, the main theme of the NetMagnet research project I'm working on through the Netpublics seminar. An informed speculation I have is that the future of content creation and dissemination won't just come from people. It will also come from the social world of objects — things that have histories and experiences. A different kind of witness upon the world, and a witness to events that are of interest to the other blogging species — people.
Micro local content is one area in which this may be of practical concern. Just this afternoon I had a nice long meeting with Elizabeth Osder at Yahoo Media in Santa Monica. We discussed many things, including how to reward local communities for disseminating news about local sports events. I mean..really local sports events — the little league team scores, for instance. Now, this fits into a larger conversation about the news content ecology, but just taking this particular problem in hand in the context of the Blogject: why don't scoreboards blog? Sure, it's not a question deserving any measure of brilliance for the asking, but it suggests a (super simple) example of the Blogject.
Why are blogging objects interesting? The idea bubbled up as I was reading Bruce Sterling's "Shaping Things". The [w:Spime] — the "thing" in the world that knows itself and is able to tell things around it about itself. RFID is the Paleolithic era for Spimes. Blogjects are Spimes that are fluent and legible, so that anyone can read them. Blogjects are meant for humans to read, in human code, not encrypted Arphid data. Blogjects are the prototype framework for experimenting, DIY style, with what Spimes can become.
The current, upgraded brain of the Aibo blogs, for instance.
The motivation here is not just to create objects that blog, as we now understand blogging, but to use the framework of the complete blog social formation as one in which objects participate — first-class — in the entire multipath culture circulation network. That means syndication, layering meaning on content, trackback, etc.
There are several Blogject prototype projects on the front burner. One is a Sakura riff called flavonoid, turned around into a U.S. idiolect, focusing on the present day craze with Pedometers. Another is a way to turn device logs into material that's legible to humans. I've already gone on and on about FlightAware, but there are other idioms — for instance, Motion Based, a community-based mobile social software framework that slurps up device track logs and translates them into fitness goals and regimens." [blogged by Julian Bleecker on netPublics]
January 03, 2006
Controlling Music with DIY Interactive Gloves
"Interactive artists and musicians have long experimented with sensor-packed gloves for controlling music, sound, and video. There's Laetitia Sonami, who controls Max/MSP with her Lady's Glove, and many other projects like the Hypersense Complex flex sensor glove-cum-gestural software as seen here this summer. Laetitia's glove is elegantly sculptural, as seen below, and with years of practice performing with it, she's built a whole performance practice around the glove as an instrument..." Continue reading Controlling Music with DIY Interactive Gloves by Peter Kirn, Create Digital Music.
December 15, 2005
The Internet of Things
Increasing "availability," Decreasing "visibility"
"We are standing on the brink of a new ubiquitous computing and communication era, one that will radically transform our corporate, community, and personal spheres. Over a decade ago, the late Mark Weiser developed a seminal vision of future technological ubiquity, one in which the increasing “availability” of processing power would be accompanied by its decreasing “visibility”. As he observed, “the most profound technologies are those that disappear…they weave themselves into the fabric of everyday life until they are indistinguishable from it”. Early forms of ubiquitous information and communication networks are evident in the widespread use of mobile phones: the number of mobile phones worldwide surpassed 2 billion in mid-2005. These little gadgets have become an integral and intimate part of everyday life for many millions of people, even more so than the internet. Today, developments are rapidly under way to take this phenomenon an important step further, by embedding short-range mobile transceivers into a wide array of additional gadgets and everyday items, enabling new forms of communication between people and things, and between things themselves". From The Internet of Things by the International Telecommunications Union. [blogged by Jim_Dowing on Smart Mobs]
December 07, 2005
My Beating Heart
Heartbeats in Sync
My Beating Heart--by Yury Gitman--is a soft huggable heart with a soothing heartbeat you can really feel. When hugging the heart, the tactile heartbeat reminds you of holding a pet or a loved-one. Hold the heart a moment and you'll begin to sense your own heartbeat slowly syncing with My Beating Heart's carefully designed rhythm. My Beating Heart is designed to help you relax, daydream, meditate, and nap.
Each "Beating Heart" is handmade in New York and is of museum quality. The use of only the highest quality materials makes for an experience that will last for years. Select a style that looks most cuddly to you.
December 02, 2005
Virtual Air Guitar
Air guitarists’ rock dreams come true
"Aspiring rock gods can at last create their own guitar solos - without ever having to pick up a real instrument, thanks to a group of Finnish computer science students. The Virtual Air Guitar project, developed at the Helsinki University of Technology, adds genuine electric guitar sounds to the passionately played air guitar.
Using a computer to monitor the hand movements of a "player", the system adds riffs and licks to match frantic mid-air finger work. By responding instantly to a wide variety of gestures it promises to turn even the least musically gifted air guitarist into a virtual fretboard virtuoso.
Aki Kanerva, Juha Laitinen and Teemu Mäki-Patola came up with the idea after being invited to develop a virtual instrument as part of their coursework. "The first thing that came to mind was an air guitar," Kanerva told New Scientist. The resulting system consists of a video camera and a computer hooked up to an appropriately loud set of speakers. A player then needs only to don a pair of brightly coloured gloves in order to rock out. Computer vision software automatically keeps track of their hands and detects different gestures, as a video of the system in action demonstrates (22MB, requires Windows Media Player and DivX codec for the visual aspect of the footage)..." Continue reading Air guitarists’ rock dreams come true by Will Knight, NewScientist.com news service, November 2005. [via Eyeteeth]
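The glove-tracking step can be sketched as colour thresholding followed by a centroid computation. This toy version works on a nested list of RGB tuples rather than a camera frame, and the `bright_orange` threshold is an invented stand-in for the team's calibrated detector.

```python
def find_glove(image, is_glove_color):
    """Return the (row, col) centroid of pixels matching the glove colour, or None.

    `image` is a nested list of (r, g, b) tuples; a real system would run this
    per camera frame, then derive gestures from how the centroid moves.
    """
    rows = cols = count = 0
    for r, row in enumerate(image):
        for c, pixel in enumerate(row):
            if is_glove_color(pixel):
                rows += r
                cols += c
                count += 1
    if count == 0:
        return None
    return (rows / count, cols / count)

def bright_orange(pixel):
    """Crude colour gate for a brightly coloured glove (thresholds are guesses)."""
    r, g, b = pixel
    return r > 200 and 80 < g < 180 and b < 100
```

Tracking a vivid, unusual colour is what lets the players skip markers and sensors entirely; the gloves are the only "hardware".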
November 21, 2005
UN predicts 'internet of things'
Ubiquitous Network Connectivity
"Changes brought about by the internet will be dwarfed by those prompted by the networking of everyday objects, says a report by a UN body. The study looks at how the use of electronic tags and sensors could create an "internet of things". The report by the International Telecommunications Union was released at the UN net summit in Tunis.
Thousands of delegates are discussing ways of narrowing the technology gap between rich and poor. "It would seem that science fiction is slowly turning into science fact in an 'Internet of Things' based on ubiquitous network connectivity," said the report. "Today, in the 2000s, we are heading into a new era of ubiquity, where the 'users' of the internet will be counted in billions and where humans may become the minority as generators and receivers of traffic."" Continue reading UN predicts 'internet of things' by Elizabeth Biddlecombe, BBC News. [via julianbleecker on eyebeam reblog]
November 17, 2005
Take Heart and flickr colr pickr
Two Visualization Tools
In Take Heart--by Monika Hoinkis--a bowl is filled with water. When you take hold of it by the handles, it becomes literally moved by the rhythm of your heart. Water becomes storage for discrete information and makes it visible. It starts pulsating with your heartbeat. When you touch the object with only one hand, the water stays calm. When holding both handles, the water starts vibrating. Your heartbeat is calculated through the handles' measurement of your skin resistance and then assigned to the water's pulsation. The heartbeat is now stored and the water will 'beat' with this rhythm until the pulsation slowly abates. At the end the water is calm and discharged again, unless the bowl is touched by a new person, to whose heartbeat it would then set itself. It then beats with another's heart. [via infosthetics]
flickr colr pickr is an online application showing a circular color chart out of which users can pick a descriptive color category, plus a specific color and brightness. The application then shows a corresponding collection of color-matched Flickr images. See also tagged colors and the flickrland visualization for similar conceptual approaches. [krazydad.com] [blogged on infosthetics]
KrazyDad is Jim Bumgardner. He's a programmer/hobbyist with a passion for making cool graphics software and software toys. All the software presented on his site was written by him, at home, on nights and weekends. Ditto for the website itself. Jim programs software for the interactive TV industry. He is also a part-time instructor on various subjects, including interactive art, multimedia, various programming languages, chess for kids, and Lego robotics.
October 18, 2005
on-the-move interaction with everyday objects
Can we merge the virtual world with the physical world? ReachMedia--by Assaf Feldman, Sajid Sadi, Emmanuel Munguia Tapia--is a system for seamlessly providing just-in-time information about everyday objects. It is built around a wireless wristband with an RFID reader and accelerometers. The wristband detects physical objects that the user is interacting with, and retrieves relevant and personalized information via a smart phone. The user can then have a hands- and eyes-free interaction with the application, using a unique combination of slight gestural input and audio output. See Bracelet Navigates Net, Technology Review News. Download PowerPoint presentation. [via]
October 07, 2005
ASCII Carpets For Sale
networked objects for the home
carpet/s is an internet-based project by Lithuanian net artist mi ga that allows you to purchase a personalised carpet made out of ASCII. It is based on a PHP application in which internet resources are used as yarn to weave ASCII cloths. The user's influence is as small as a click of a mouse; all the rest is done by a machine. The program generates textual output based on the time when the user visits the web site. The time, in the form hh:mm:ss, is then used as a keyword for a Google search. The program downloads the contents of the second result, rejects HTML tags and white space, and arranges all textual content into a carpet-like form. The produced carpet-like image can be printed out, signed by the artist and then sent to the purchaser.
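The weaving step, as described, can be sketched in a few lines. The tag-stripping regex and the padding character are assumptions made for illustration; the original is a PHP application.

```python
import re

def weave_carpet(html, width=16):
    """Strip tags and whitespace from fetched page text, then fold it into rows."""
    text = re.sub(r"<[^>]+>", "", html)   # reject html tags
    text = re.sub(r"\s+", "", text)       # reject white space
    rows = [text[i:i + width] for i in range(0, len(text), width)]
    if rows and len(rows[-1]) < width:
        rows[-1] = rows[-1].ljust(width, ".")  # pad the fringe to keep the rectangle
    return rows
```

Because the search keyword is just the current clock time, two visitors a second apart weave from entirely different yarn.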
September 28, 2005
Nabaztag is a newborn bunny, one of a unique species of intelligent, smart objects. I'm 23 cm tall, I wriggle my ears, I sing, I talk and my body lights up and pulsates with hundreds of colours. Thanks to Wi-Fi technology, I'm always connected to the Internet. Oh, and I'll only set you back 95 €.
Set me up in your home or office and I'll be your personal companion. I can tell you about tomorrow's weather, about traffic jams on the ring road, keep you posted about the stock market's roller coaster rides. I can tell you discreetly, just by flashing or moving my ears about. I can also do it with music or talk. Obviously, you're always in control of what news you want to hear. And there's more...
Thanks to me, your friends and family will have a totally new way of keeping in touch: through the web, text messages, their phone or email… plenty of different ways to send you messages, music, MP3 files that I'll read out to you… or sing out, or even dance. Your friends will no longer be confined to the depths of your computer or phone: they'll come alive in your home, in the noble guise of a rabbit.
But that's still not all. This is only the start of a new era of smart objects, and you'll have plenty of opportunities to discover and invent new uses - as useful, futile or subtle as you choose.
September 27, 2005
Low Tech Sensors and Actuators
A Low-Tech Conceptual Framework for Complex Interactive Systems
Commissioned by the Foundation for Art and Creative Technology (FACT), we developed a suite of low-tech sensors and actuators using electronic children's toys and gadgets that can be hacked for their constituent parts. In this way, artists and architects can quickly and cheaply develop interactive spaces and objects. The outcome of the project was an instruction manual of sorts, a manifesto for low-tech, a conceptual framework for complex interactive systems. IR toys as proximity sensors, cats as interfaces, torches as power sources, walkie-talkies as wireless networks. Have a look at our report [PDF]
September 26, 2005
The Art, Technology, and Culture Colloquium of UC Berkeley's Center for New Media Presents:
Ephemeral Gumboots: Dancing the Rhythm of Change
Ephemeral Gumboots: Dancing the Rhythm of Change by Cobi van Tonder, Artist and Musician, Johannesburg; *Wed*, 28 Sept, 7:30-9pm: UC Berkeley, 160 Kroeber Hall. All ATC Lectures are free and open to the public.
The history of Gumboot dance illustrates the potential of culture for transforming social aggression. Cobi will present Ephemeral Gumboots, a hybrid media artwork/musical instrument that takes South African Gumboot dance and extends it as an interface into an electronic music-making system. How does the music of Ephemeral Gumboots reflect the media age in South Africa? How has the artist, or facilitator, responded (or succumbed) to the politics and hegemony of technology?
For Deleuze and Guattari, music "both simulates space and creates it literally, on the dance floor, in headphones, on the Internet" (Andrew Murphy, 2000). The refrains of dance music provide territories for the body and mind to move and travel. Cobi will reflect on the contemporary impact of technology and her personal experience of it as she presents her project from a social and political perspective. She will also invite composers and dancers to use her system for further creative exploration.
Cobi van Tonder, aka OTOPLASMA, is a South African composer, producer and performer who specialises in interactive electronic music and other digital media. She has worked with various dance choreographers, video artists and actors. She also produces commercially for cinema, television, radio and mobile media. Cobi van Tonder holds a degree in Music in History and Society (University of the Witwatersrand), a National Diploma in Light Music (Technikon Pretoria) and a National Certificate in Musical Theatre (Technikon Pretoria). She was born in Pretoria and grew up in a small town in the North Western countryside of South Africa. During 2005, she is the ZERO ONE/IDEO artist-in-residence in the Bay Area.
For more information see >>
ATC Primary Sponsors: UC Berkeley Center for New Media (CNM) and Center for Information Technology in the Interest of Society (CITRIS) .
Additional ATC Sponsors: Office of the Executive Vice Chancellor and Provost, College of Engineering Interdisciplinary Studies Program, Consortium for the Arts, BAM/PFA, and the Townsend Center for the Humanities.
ATC Director: Ken Goldberg
ATC Associate Director: Greg Niemeyer
ATC Assistant: Irene Chien
Curated with ATC Advisory Board
For updated information, please see >>
September 24, 2005
Correlation Between Vision and Sound
In its simplest form SonicWireSculptor is a novel 3D drawing tool and a unique musical instrument, but perhaps most important, it's just fun to play with. The project started out as a personal instrument for Pitaru to perform on. During concerts, audience members often inquired whether they could experience the tool first hand. This encouraged Pitaru to transform the software into an immersive public installation. The installation included enhancements to the original work, allowing a wider range of users to intuitively interact with the environment. Gallery visitors would enter a dark room with a surround-sound system, a projection and a unique drawing station. Opening nights for these exhibits would often double as performance and workshop events where the audience and Pitaru explore the tool together. Participants would be encouraged to add their work to a steadily growing collection of beautiful and surprising sonic sculptures. [Related]
September 22, 2005
Sift, Sort, Stretch, Pull
To see is to choose. To see is to select something and overlook something else. To be seen is to exist. When someone sets sights on you, you appear in the other person's world.
Strainings--by MusicalFieldsForever--is about this genesis: to select what you want to keep from that which should be left out. In a physical, personal and cultural sense. Strainings is an interactive sound and light (audiovisual) installation contained in a silo. The many interpretations of the word "strain" embrace all of this: to sift and to sort, as well as to stretch and pull a string. In Strainings the audience decides what should be heard and seen. By pulling strings, they change the mesh of the strainer, the large web. Light, sounds and images are strained and created.
Strainings consists of seven light ropes, six meters long, hanging from the roof and pulled through a web of flexing rubber bands. Interacting with the light ropes changes the structure of the web, which casts shadows onto the images projected on the wall. A camera analyses movements in the light ropes, and the input is used to change and make variations in the light, the light ropes and the sound. The background sound consists of bell frogs recorded in Baldringe, Sweden, plus strings and synthesised sounds that change, mix, and filter generatively. The interaction sound consists of trumpet loops processed with granular synthesis, making it possible to play continuously and to change tones by pulling the ropes.
The software is developed in Max/MSP and SoftVNS. [via]
September 14, 2005
Between Instrument and Artwork
"At three separate venues at Ars Electronica 2005 in Linz, Thomas Petersen found examples of hybrids between artworks and artistic instruments, which explore the relations between sound and vision in physical and virtual space. In this article he offers a few perspectives on what we can learn from these artistic interfaces.
Creative Interactivity Required: Certain types of digital artifacts place themselves firmly in the border zone between instruments for artistic production and artworks. They are 'artistic interfaces' in both senses of the concept. On one hand, they constitute useful interfaces for users to create independent artistic expressions. On the other hand, these tools are artworks in their own right, which offer critical perspectives on the concept of the instrument itself and the dynamics of the creative process.
In the field of digital culture the difference between instrument building and art-making can often be quite subtle. In some cases there are not necessarily any structural differences between an interactive artwork and an interactive space for users to produce aesthetic experiences, because they can share characteristics such as interactivity, interchangeability, transformation, user-controlled graphics and sound, etc. Of course many digital pieces are a far cry from resembling tools, e.g. genres like hypertext fiction, point-and-click game environments, etc. The defining characteristic of the artistic interface is a specific type of interactivity where the common denominator is a large number of reconfigurable elements, the total sum of which creates a manipulable space into which the user can project ideas and expressions. This space depends totally on direct creative interaction from the user.
Each of the following pieces is a framework of possibilities and limitations. They do not promise more complex or superior results than conventional or analogue instruments, but constitute specific explorations of the production of aesthetic results within certain boundaries. These pieces are critical approaches to the technologies of creative production and can be considered complex artistic statements in themselves.
Drawing Sculptures of Sound: An immensely popular work at the Cyberarts exhibition at Ars Electronica was Amit Pitaru's Sonic Wire Sculptor, which received an Honorary Mention in the Interactive Art category. This piece attracted quite a crowd in a stuffy room at the O.K. Centrum für Gegenwartskunst.
Sonic Wire Sculptor deals with the production of sound by means of three dimensional wire drawings. It explores the correlation between vision and sound as the user literally draws the sound with a pen at a drawing station. The drawings are inserted in a rotating 3d-space as spinning wire sculptures. As this virtual space rotates 360 degrees, the sculpture is interpreted by the computer and transformed into sound according to the initial speeds and positions of the drawings. The vertical axis determines the pitch, notes are indicated by horizontal lines and a metronome can be found at the centre of the stage. The user can save and retrieve the sculptures, stop the rotation and reposition the compositions freely to reconfigure the elements within a new tonal space. The installation of the piece in the exhibition space is based on surround sound so the sound in the physical space reflects the structure of the virtual sculpture.
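The vertical-axis-to-pitch mapping can be sketched as a linear interpolation into MIDI note space followed by the standard equal-temperament conversion. The note range here is an assumption for illustration; Pitaru's actual scale and quantisation are not documented in this text.

```python
def point_to_note(y, y_min=0.0, y_max=1.0, low_midi=36, high_midi=84):
    """Map a stroke point's height to a MIDI note: higher on the vertical axis, higher pitch."""
    t = (y - y_min) / (y_max - y_min)
    return round(low_midi + t * (high_midi - low_midi))

def midi_to_hz(note):
    """Equal-temperament conversion, A4 (MIDI 69) = 440 Hz."""
    return 440.0 * 2 ** ((note - 69) / 12)
```

As the virtual space rotates, each point of the sculpture would pass the "playhead" in turn and sound at the frequency its height dictates.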
Compared to many other similar projects the piece distinguishes itself by actually being playable as a performance instrument. It can be used as a precise musical instrument by means of the guidelines to achieve specific notes and precise rhythm. A practical implementation into a joint performance setting is definitely conceivable. The piece is a critical approach to the production and representation of sound at the same time. The wire sculptures are an original counterpart to the conventional note system and a dynamic equivalent to graphic notation systems like the ones used by composers like John Cage, Brian Eno etc.
A Playful Physical Sequencer: A similar project is Golan Levin's Scrapple: A physical active score table, which was presented at the Ars Electronica Center across the Nibelungen bridge. As with Pitaru's Sonic Wire Sculptor, Levin's piece Scrapple deals with the transformation of shapes into sounds. In this case the piece is based on the configuration of physical objects on a table, transforming them into a musical score.
Instead of placing samples in a sequencer or notes on paper, Scrapple adds a physical dimension to computer-generated music. The user has to move and adjust objects to indicate rhythm, tones, melody etc. The visual result quickly ends up as an abstract composition of shapes corresponding to sound. The strength of the piece lies not so much in being a precise musical instrument as in being a very enjoyable toy, appealing to the playful interaction between several users at a time. It also adds a bodily aspect, forcing the users to move swiftly around the table.
Other projects by Golan Levin touch upon similar aspects. In The Manual Input Sessions, hand gestures, finger movements and other objects are analyzed by custom software generating graphics and sounds. In Yellowtail, virtual worm-like creatures are drawn by the user on a screen and transformed into sound by a scanning mechanism.
Augmented Table Composition: Going back over the bridge to the 'Interface Culture' exhibition at Kunstuniversität Linz (Art University of Linz), the reacTable* was exhibited next to a number of experimental interfaces. This particular piece was developed at the Music Technology Group, Pompeu Fabra University, Barcelona.
The instrument consists of a round table and plastic objects marked with various symbols. Based on these symbols a camera tracks the position and direction of the objects when they are placed on the table. When one of the objects is placed, the particular symbol is recognized and the immediate environment is augmented with graphics indicating the functions of the element. Some of the objects produce sound and other objects modify these sounds. Some events can also affect the general visual appearance of the table. The total sum of events creates a dynamic, tangible interface with virtually unlimited possibilities in the creation and visualization of sound. The visual result is a stunning abstract animation generating buzzing, clicking and humming compositions.
The interface allows the composer to handle physical objects to generate digital sound supported by a full range of dynamic visualizations. As opposed to the widespread use of e.g. the laptop as a performance instrument, the strength of reacTable* is its tangibility and involvement of the user's body to manipulate digital sound. reacTable* has a social and telematic dimension as there is the possibility of involving distant participants at a shared table. The piece was used in a collaborative performance between Linz and the ICMC conference in Barcelona, Spain. The two tables shared musical components and the composers could affect each other's components in a shared performance situation.
Basic Research: The range of strategies shows us that the visualization/notation of sound is not an unequivocal project. The correlation between sound and vision is a magnet for pioneering instrument builders, simply because this space can never be definitively charted. Each instrument constitutes an interpretation of the ephemeral nature of sound and a longing to map it. The result is an abundance of complex analyses of aesthetic transformations and some very interesting artistic interfaces.
The sheer amount of different strategies of this kind points to the fact that the artistic instrument is under full-scale development these days. These experimental instruments can be regarded as basic research into alternative ways of creating and visualizing music. Because they take place as an experimental practice between art-making and instrument building they can establish interactive spaces which may not have the potential to survive on a commercial market. Certain aspects of these instruments will most likely find their way into commercial production and some of them will maybe become commonplace performance tools - some will not. Whatever their future destinies may be, they are nonetheless important as singular statements and they demonstrate the importance of experimental spaces to develop and test new approaches to artistic tools." [originally published by Thomas Petersen, artificial.dk]
- Online version of Amit Pitaru's Sonic Wire Sculptor: http://www.pitaru.com/sonicWireSculptor/
- Amit Pitaru's website: http://www.pitaru.com
- Golan Levin's website: http://www.flong.com
- Yellowtail by Golan Levin: http://www.flong.com/yellowtail/index.html
- The Manual Input Sessions by Golan Levin: http://tmema.org/mis/
- reacTable*: http://www.iua.upf.es/mtg/reacTable/
- A list of tangible musical interfaces: http://www.iua.upf.es/mtg/reacTable/MusicTables/
- Ars Electronica website: http://www.aec.at
Tread Softly or Else
Dustbunnies, by Stijn Schiffeleers and Hendrik Leper from Boutique Vizique, is a colony of seven shapes that contain two kinds of sensors, a RF-communication system, two microcontrollers, a big battery, speakers and a motor. When there's no human around, they collect dust, hair, flakes of skin and dirt and murmur to each other in a mysterious language. But if you walk into their territory, they become quiet and pretend to be dead.
The only way to observe them in that first state is to be motionless yourself. Touching a dustbunny will cause different reactions. If they like the way you treat them they will make an amusing sound according to the movement you cause. But the Dustbunnies can also become angry. If you mistreat one of them, all dustbunnies will start screaming. The whole group will show their dislike.
September 13, 2005
0kn0.org is a new interdisciplinary media center for art and technology in Brussels. 0kn0 supports the research, development, creation and presentation of new forms of innovative cultural production, and is dedicated to exposing broad and diverse audiences to new technologies and media arts through a programme of concerts, interactive installations, performances, workshops and lectures.
OKNO presents BO-KU-SHI-N-GU BA-GU [Electronic Boxing Bag Melody Making Machine], an interactive installation by SWUMMOQ.NET [Heerko van der Kooij/nl and Niels Wolf/de in collaboration with Travis Robertson/ca]. Punching the Bokushingu Bagu the visitor generates and composes melodies from a selection of various sounds depending on the power and area of the punch. The visitor applies great effort only to be hindered by the outcome of his intensity.
Opening performance by swummoq.net: friday 16 september 8pm-10pm or try it yourself on: Saturday 17 September 2pm-6pm; Sunday 18 September 2pm-6pm; Entrance free.
radio_okno is online now! click: http://qt.okno.be:8000/mix.m3u to open the livestream in your mediaplayer.
Masterclass on Synthetic Speech and Machinic Sound Poetry. Participants can still apply. The masterclass is scheduled from October 17-21 and is part of the x-med-k-2005/series, commissioned by the VAF [Vlaams Audiovisueel Fonds].
OKNO: koolmijnenkaai 30/34 quai aux charbonnages | brussels 1080 | belgium
tram 18 [walvis] | Metro Graaf van Vlaanderen - Comte de Flandres
okno is supported by the Ministry of the Flemish Community and the VGC.
September 06, 2005
Graspable Surrogates in Multiple Point-of-View Interactive Narratives
Today's story creators have the opportunity to bring digital stories back into our physical environment. Using wireless and tag-sensing technologies, interactive narratives can make use of rich, haptic interactions with physical objects, combining them with the flexibility of digital multimedia content processing and delivery.
The Tangible Viewpoints project explores how physical objects and augmented surfaces can be used as tangible embodiments of different character perspectives in a multiple point-of-view interactive narrative. These graspable surrogates provide a direct mode of navigation to the story world, helping to bridge the gap between the realms of bits and atoms within the field of multimedia storytelling.
Using a system for interacting with a character-driven narrative, the different segments of a multiple point-of-view story are organized according to the character viewpoint they represent, as well as their place in the overall narrative. These segments can consist of various types of media (video, audio, images, and text), and can present character development, action, and location with as much complexity as any scene of a film or chapter of a book.
Inspired by primitive life, Public Anemone is a robotic creature with an organic appearance and natural quality of movement. By day, Public Anemone is awake and interacts with the waterfall, pond, and other aspects of its surroundings. It interacts with the audience by orienting to their movements using a stereo machine vision system. But if you get too close, it recoils like a rattlesnake.
The anemone starts each day's cycle with a high confidence level and a desire to complete various tasks (such as watering the nearby plants, drinking from the pond, or bathing in the waterfall). It chooses to either continue its tasks or interact with a participant based on their behavior and how much progress it has made with its tasks. If a participant wins the robot's attention, it will respond by orienting toward that person and following their movement.
If a participant gets too close or makes a threatening movement, the anemone may become frightened and recoil from the crowd until it regains its confidence. See Movie (.mov file, 5770 KB). Because the robot makes decisions based on its internal drives and audience interaction, each day is different from the next but follows a coherent theme.
Public Anemone is a collaboration between the Robotic Life and Synthetic Characters research groups at the Media Lab as well as MIT's Artificial Intelligence Lab.
Regine Blogs Ars
TEMPEST is based on the surveillance technology known as Van Eck Phreaking - computer screen content can be reconstructed remotely by picking up the emitted EM-field of the screen. TEMPEST utilizes this technique to transform purely generative graphic into a composition of noise which again is fed back into the image generating process. Several AM receivers are tuned into different frequencies of a screen and plugged into an audio mixer for further sound processing. The graphics on the screen become a means of producing sound and it is only the graphics which determine the different timbres and rhythms. By Erich Berger.
Interface Culture at the Linz University of Art was founded last year by Christa Sommerer and Laurent Mignonneau. The programme deals with human-machine interaction to develop innovative interfaces. Went to see their works yesterday.
SoundToy, by Christina Heidecker, Harald Moser and Timm Oliver Wilks, is a 3D environment you navigate as if you were a racing-car driver. During the ride you use the steering wheel to create and compose 3D sounds. You place sound objects, each assigned to electronic beats, in the space. Their speed, pitch and volume can be individually adjusted using the steering wheel and the accelerator pedal. The composition is generated by the movement and position of the sound objects with respect to one another, but also by the route you select.
Recipe Table, by Istvan Lorincz, Hanna Perner-Wilson, Thomas Wagner and Andreas Zingerle, is an interactive workplace built into a kitchen countertop that enables users to intuitively search for recipes. You place tins, bottles, vegetables and other ingredients on it, and in return the system makes recipe suggestions. These culinary suggestions are also depicted graphically as finished dishes on the workplace.
Blow, by Taife Smetschka, is a breath-controlled video installation. There's a microphone and a projection of a clip from Billy Wilder’s film *The Seven Year Itch*, the scene in which Marilyn Monroe stands on the grate above the subway ventilation shaft. At first she is stationary, smiling at viewers from the screen. She doesn’t begin moving until she feels a cool breeze. In *blow!* the breeze has to be provided by the installation visitors who must blow as hard as they can into the microphone. Marilyn’s skirt flutters in the breeze as long as the visitor blows into the microphone.
Mika Satomi's Gutsie is a cyber android filled with “guts.” Peeping into its interior through its eye-like hole, you can observe its intestines in motion. It will show you the places you want to see by tracking your eye gaze, but at the same time, your gaze may infect it. The interior of our body is something very private, often disgusting, and thus prohibited to be seen or to be shown. In media, visual images of our insides are often used to induce feelings of violence or disgust. Ironically, this is something that is stuffed inside everyone’s body without exception.
The G-Player (Global Player), by German artist Jens Brand, works like a CD player. But instead of playing CDs, it plays the globe. The device knows the position of more than a thousand satellites and enables you, by the use of a virtual 3D planetary model, to listen to an imaginary trace of a selected flying object. Like a needle on a record, the satellite follows the Earth's surface. The G-Player transforms the elevation profile along the satellite's course directly into sound. The simple display shows the selected satellite's name, type, altitude and position over the planet (thus the latitude and longitude). Topographic data are interpreted as audio data. "Noise sounds" result from the high density of the data. Pictures.
September 01, 2005
A Physical Score Table
In Scrapple, objects placed on the table become sound-producing elements that determine the score's rhythm and pitch. They are interpreted as sound-producing marks in an active score: flexible shape-holding curves allow for the creation of melodies, a group of small toys and other moving objects yield ever-changing rhythms. Video projections on the table turn the installation into a simple augmented reality, in which the objects are elaborated through colorful and explanatory graphics. [blogged by Regine on we-make-money-not]
August 30, 2005
TeleSon: Invention#8 by Chris Brown
TeleSon: Invention#8 is composed for four musicians playing two networked "reacTables", one each in Barcelona and in Linz, Austria (ICMC2005). The reacTable is a new electronic music instrument developed at the Music Technology Group in Barcelona by a research team headed by Sergi Jorda. Brown explains: 'I have interfaced my own synthesis engine, written in SuperCollider3, to the reacTable's visual and gestural interface, thus composing my own synthesis environment with its own rhythmic and tonal character. Like the other works in my "Inventions" series (recorded on Pogus CD "Talking Drum"), this piece involves polyrhythms, computer networks, and improvisation. Like "Invention #5" (1999), it also involves musicians collaborating at great physical distance from each other'. See Sounding the Net: Interview with Chris Brown by Peter Traub.
Chris Brown, composer, pianist, and electronic musician, creates music for acoustic instruments with interactive electronics, for computer networks, and for improvising ensembles. Recent recordings of his music are available on Tzadik, "ROGUE WAVE" (2005) and "LAVA" (1992); and on Pogus, "TALKING DRUM" (2001). He has also performed and recorded extensively with "The Hub", "Room", and the "Glenn Spearman Double Trio". As a pianist he has recorded music by Henry Cowell, Luc Ferrari, and John Coltrane; as an electronic musician, he has collaborated with John Zorn and Wadada Leo Smith. He teaches electronic music at Mills College in Oakland, where he is Co-Director of the Center for Contemporary Music (CCM).
Hard Drive Orchestra
Radio Taking Place Between the Ears and Behind the Eyes
Hard Drive Orchestra--by Fundamental Radio (August Black and Markus Seidl)--is a sculpture and instrument that can be programmed to run autonomously as a stand-alone installation, or played live and intuitively in a performance, taking control commands via OSC, MIDI, or some other input protocol.
The hardware component consists of a mobile computer containing 8-16 IDE and SCSI hard drives of various ages of computing history and therefore of various sonic properties. The mechanical sounds of the drives are recorded live with 8-16 separate contact microphones, cleaned by a noisegate, and amplified over loudspeakers.
The software component consists of two objects written in C for PureData. One object performs very specific read-write operations on the various hard disks in order to force the disk's head to move from the innermost part of a drive's platter to the outermost part and back again. This results in an audible grinding click. The second object reads the data being written on each disk and blits it to the video screen. Essentially, the first object allows one to control the sonic activity of each drive while the second object simply displays the data to screen.
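The original objects were written in C for PureData, but the trick of the first one is easy to illustrate. The sketch below is a hypothetical Python rendering of the same idea, under my own assumptions (function name, parameters): alternating reads between the first and last sectors of a raw disk device force full-stroke head seeks, which is what produces the audible grinding click. Run against a raw spinning-disk device it needs root access; run against an ordinary file it merely seeks harmlessly within the file.

```python
import os

def grind(path, cycles=100, chunk=512):
    """Alternate reads between the start (outermost tracks) and the end
    (innermost tracks) of `path`, forcing full-stroke head seeks when
    `path` is a raw spinning-disk device. Returns total bytes read."""
    fd = os.open(path, os.O_RDONLY)
    total = 0
    try:
        size = os.lseek(fd, 0, os.SEEK_END)       # capacity in bytes
        for _ in range(cycles):
            os.lseek(fd, 0, os.SEEK_SET)           # LBA 0: outer edge of the platter
            total += len(os.read(fd, chunk))
            os.lseek(fd, size - chunk, os.SEEK_SET)  # last sectors: inner edge
            total += len(os.read(fd, chunk))       # audible full-stroke seek
    finally:
        os.close(fd)
    return total
```

Note that on modern drives the OS page cache and the drive's own cache can absorb repeated reads of the same sectors; the real objects presumably bypassed caching to guarantee physical head movement.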
With the Hard Drive Orchestra, Fundamental Radio can analyze and present the physical, mechanical, and most importantly, the aesthetic properties of stored information. The HDO premiered with "The Physical Mechanism of Forgetting" at the Freud Museum in St. Petersburg [Bolshoi Prospekt, Petrogradskaya Storona, 18a] as part of the Traumlandshaften festival, October 7-10, 2003. There, the successive versions of the ****soft operating system were deleted, slowly, over a week, culminating in a live performance of found and submitted data.
August 19, 2005
I-F-E-A-R (infrasound fear emotion audio reverb) is a project by Jodie Hancock, part of the CoEDD graduate exhibition. "This project is an exploration of emotion as a physical force and how that force can define an environment, often with more intensity than is defined by the senses. Fear is a dark and physical emotion which lends itself perfectly to conscious spatial composition."
Jo is using infrasound to see if specific emotions can be triggered. Just as walking down a dark corridor in a game can evoke fear, an installation consisting of a corridor was built for the exhibition. Outside, the game Doom 3 is played by another participant. The two events are linked, as the player is controlling the infrasound played to the person in the dark corridor. As Doom is played, the video signal is sent to Max/MSP. The level of light as you walk around the game is monitored by Jitter video. Each time the player shoots in the game, the frequency of the infrasound is changed based on the darkness of the game, creating a linked experience. project website / development blog. [blogged by Chris on pixelsumo]
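The piece does this mapping in Max/MSP with Jitter; as a rough sketch of the same logic, here is a toy Python version. The frame format, the frequency band, and the function names are my assumptions for illustration, not details of the actual patch.

```python
# Hypothetical sketch: on each in-game shot, the mean brightness of the
# current video frame picks the new infrasound frequency -- darker
# scenes map to the lower, more unsettling end of the band.
def mean_brightness(frame):
    """frame: a flat sequence of 8-bit grey values (0-255)."""
    frame = list(frame)
    return sum(frame) / len(frame)

def shot_to_infrasound(frame, f_lo=5.0, f_hi=19.0):
    """Return the infrasound frequency (Hz) triggered by one shot."""
    b = mean_brightness(frame) / 255.0     # 0 = black, 1 = white
    return f_lo + b * (f_hi - f_lo)
```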
Toward a Global "Internet of Things"
Evolution or Revolution?
"...Most industry observers agree that the next wave of growth in the technology sector will be outside the realm of the traditional PC desktop. But many think of that growth in mundane terms -- with such things as cell phones and PDAs. In reality, we stand on the verge of an era that will see previously unimagined networked devices and objects. There are already some exciting applications that span the technology spectrum. NASCAR.com offers a Java applet-driven virtual dashboard that displays real-time wireless telemetry from cars during a race -- registering location, speed, RPM, braking, and more. The race cars have effectively become networked devices. Meanwhile, NASA, with the assistance of GE Medical, is now able to monitor real-time medical data (such as blood pressure, respiration, and heart rate) from its Space Shuttle astronauts. Even trees are on the network. Fifty battery powered "micromote" sensors now dangle from UC Berkeley's Mather Redwood Grove, part of a pilot program to monitor the health of groves of redwoods. The sensors register light, moisture, and temperature, enabling scientists to continuously monitor the microclimates surrounding given trees.
Such diverse networked "devices" offer concrete evidence of Metcalfe's Law. Robert Metcalfe, one of the developers of Ethernet, observed that the usefulness of a network grows with the square of the number of nodes (users or devices) connected to it. (Imagine how useful the telephone system would be if there were only two phones in the world!)
"In the future, everything of value will be on the network in one form or another," says John Fowler, Software CTO of Sun Microsystems. "And once they're on the network, we can aggregate data from those diverse devices, and then deliver that data to equally diverse devices -- in informative and compelling ways." And that's exactly what we're seeing today. "Most people think of a PC or a PDA as things connected to the network," continues Fowler. "But here we are connecting trees, race cars, and astronauts to the network. It's going to become a much more seamless spectrum."..." From Toward a Global "Internet of Things" by Steve Meloan, November 11, 2003.
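Metcalfe's observation quoted above is easy to make concrete: with n nodes there are n(n-1)/2 possible pairwise connections, which grows on the order of n². A one-line sketch (function name my own):

```python
def pairwise_connections(n):
    """Distinct node pairs in an n-node network: n(n-1)/2, the
    quantity behind Metcalfe's n-squared usefulness claim."""
    return n * (n - 1) // 2
```

Two phones allow exactly one possible conversation; a hundred phones allow 4,950.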
August 08, 2005
Populating Physical Spaces with Digital Sound Effects
Heaven Seed is a smart plastic ball that senses its movement and generates various sound effects in real time, thereby enhancing the game people play with it. The motion data sensed by the ball are wirelessly transmitted to a host computer to provide rich auditory experiences. For example, if you throw it upward, it makes a whizzing sound; when you catch it, it makes a flopping sound. If it's left unused for a while, it starts snoring.
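The event logic just described can be sketched as a simple classifier on the host side. This is a toy illustration only; the thresholds (in units of g), the event names, and the idea of a single 3-axis accelerometer sample are my assumptions, not Kuwakubo's code.

```python
import math

def classify(ax, ay, az, idle_seconds=0.0):
    """Map one 3-axis accelerometer sample (in g) to a sound cue."""
    g = math.sqrt(ax * ax + ay * ay + az * az)
    if idle_seconds > 30.0:
        return "snore"     # left unused for a while
    if g < 0.2:
        return "whiz"      # near free fall: the ball is in the air
    if g > 3.0:
        return "flop"      # sharp spike: just caught or landed
    return "silent"        # ordinary handling, roughly 1 g
```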
Ryota Kuwakubo, who created this artifact, says that he wanted to evoke in users the kind of feeling kids would have while playing with a miniature car – kids may make the sound of speeding with their mouths and they could get totally immersed in the play. Kuwakubo also suggests a possibility of making everyday life more "interesting" by populating our physical spaces with various digital sound effects. When you open the door, when you start a washing machine, etc. If the world around us will be the interface to the digital information spaces, Heaven Seed suggests unique possibilities of enhancing our embodied interactive experiences with creative sound design. Related websites: vector::scan; MyCom PCWEB. [blogged by manekineko on we-make-money-not]
August 04, 2005
Haptic Magical Mystery Tour
The Hyperfabric fabric-based interface lets you reach beyond the screen. This "touchable" touchscreen, made out of an elastic latex-like fabric, warps like rubber and can sense how hard you press it, where you press it, and even when several people use it at once. It feels like you are going "through" the screen.
You can press, grab, twist, punch and play with the screen. It can even support your full bodyweight. The Hyperfabric screen is specially designed to communicate with a computer to generate interactive computer graphics, in realtime.
Adam Montandon and the team at HMC MediaLab created a seven-foot-high Hyperfabric installation for the Artytechs parlour this summer at the Port Eliot literature festival. In a hidden dungeon underneath the stately home, visitors had the chance to come face to face with the magical and mysterious interactions. [blogged by Regine on we-make-money-not]
August 01, 2005
Exploring Ways of Interacting with the Computer
HyperSense Complex--the collaborative trio of Alistair Riddell, Somaya Langley, and Simon Burton--have been working together since 2002. They've been exploring notions of collaborative interactive performance in a networked computing space (essentially one where members of the trio - depending on the configuration of a specific performance - can influence each other's sound, for example by sampling each other's output, adding effects, etc.). HyperSense has also been developing alternative HCIs using flex sensors worn on the hands (and strapped into a jacket arrangement). The project utilises data sent from the sensors through microcontrollers and passed via USB to some Python scripting which decides what the sensor data should do. They then implemented OSC and are using it to send information to SuperCollider running on a separate laptop, which handles the samples and audio-based materials.
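The OSC leg of that pipeline is simple enough to sketch by hand. The helper below (the `/flex` address, the function names, and targeting SuperCollider's default scsynth port 57110 are my assumptions, not the trio's code) encodes a minimal OSC 1.0 message with float32 arguments and sends one sensor reading over UDP:

```python
import socket
import struct

def _pad(b):
    """NUL-terminate and pad to a 4-byte boundary, per OSC 1.0."""
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address, *args):
    """Encode an OSC message whose arguments are all float32."""
    tags = "," + "f" * len(args)               # OSC type-tag string
    out = _pad(address.encode()) + _pad(tags.encode())
    for a in args:
        out += struct.pack(">f", a)            # big-endian float32
    return out

def send_flex(value, host="127.0.0.1", port=57110):
    """Forward one normalized flex-sensor reading to a synth server."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(osc_message("/flex", value), (host, port))
    finally:
        sock.close()
```

In practice one would use an OSC library, but the hand-rolled packet shows how little is on the wire: a padded address, a padded type-tag string, then the raw float32 values.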
July 26, 2005
You wake up in the morning and sit at your breakfast table. A can of Earl Grey comes magically sliding towards you. But who is suggesting a cup of tea? A group of students at Tokyo Institute of Technology created an installation called "Kobito: Virtual Brownies," which allows us to see and feel who it is: kobitos (dwarves).
The system was developed using Computer Vision, Artificial Intelligence and Haptic Feedback technologies. The dwarves appear in a movable display device called Kobito Window (see below) that blends video images and virtual 3D avatars of dwarves. Kobito Window is like a magic lens through which people can view the world differently from the way our naked eyes can see: the device recognizes its orientation using a rotary encoder and displays 3D avatars based on a current view angle. Moreover, you can feel the dwarves through haptic feedback. If you hold the tea can, you can feel the force of the dwarves. If you push the can against the dwarves, they will react to it.
This mpeg movie shows the overview of the system with English captions.
The camera mounted on the ceiling tracks the position and the orientation of the tea can. Based on this tracking technology, the software application determines how 3D avatars act. The system presents the 3D avatars visually (in Kobito Window) and physically (on the table as movements of the tea can). The tea can is controlled by a mechanical component hidden under the table (see the above photo). This component, whose position can be programmatically controlled, integrates a magnet for moving the metal can without physical contact.
People who created this installation think systems like this "can be used in the fields of design, amusement, and healthcare."
Related Website: Robot Technology Research Forum at Tokyo Institute of Technology (in Japanese) [blogged by manekineko at we-make-money-not]
July 22, 2005
Tangible Acoustic Interfaces for Computer-Human Interaction (Tai-Chi) explores how physical objects, augmented surfaces and spaces can be transformed into tangible-acoustic embodiments of natural seamless unrestricted interfaces. The ultimate goal is to design Tangible Acoustic Interfaces (TAI) that employ physical objects and space as media to bridge the gap between the virtual and physical worlds and to make information accessible through large size touchable objects as well as through ambient media.
The method that will be developed is based on the principle that interacting with a physical object modifies its surface acoustic patterns, due for instance to the generation of acoustic vibrations (passive method) or the absorption of acoustic energy (active method) at the points of contact. By visualising and characterising such acoustic patterns, it will be possible to transform almost anything (for example, a wall, window, table top or arbitrary 3D object) into an interactive interface (a giant flat or 3D touch screen), opening up new modes of computer-user interaction for responsive environments.
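In the simplest passive case, localizing the contact point reduces to time-difference-of-arrival. As an illustration only (not the Tai-Chi project's actual algorithm), consider a one-dimensional bar of length L with a contact sensor at each end: a tap at position x produces a vibration that reaches the two ends with a delay difference dt = (2x − L)/v, where v is the wave speed in the material, so x is recoverable from a single measured dt.

```python
def tap_position(dt, length, speed):
    """Locate a tap on a bar from dt = t_left - t_right (seconds).

    length: bar length in metres; speed: wave speed in the material (m/s).
    Solves t_left = x/v, t_right = (length - x)/v for x.
    """
    return (length + speed * dt) / 2.0
```

Real 2D surfaces need at least three sensors and the same idea generalizes to intersecting hyperbolae, which is where the "visualising and characterising acoustic patterns" above comes in.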
Because of their numerous advantages over other methods, including the spatial freedom they provide to the user, the robustness with which they can be constructed and the ease of accommodating multiple users simultaneously, acoustics-based interfaces will become a major sensing paradigm in the future, implying enormous potential for the whole computer and information industry. [via]
July 19, 2005
Maywa Denki in Paris
NAKI, TSUKUBA, and EDELWEISS
Maywa Denki is performing at the Quartier d'ete Festival in Paris on July 21, 22 and 23. This art unit, which is produced by Nobumichi Tosa, has been inventing a number of hard-to-describe electric devices by transforming the history and culture of small electrics manufacturers in Japan's high-growth period (about 40 years ago) into three product lines of interactive art objects: NAKI, TSUKUBA, and EDELWEISS.
They've "manufactured" various kinds of musical instruments including Pachimoku, which Nobumichi Tosa wears in the photo, and Koi-Beat, which could be interpreted as "love beat" as well as "carp beat."
Pachimoku is operated by a finger-snapping motion, which is sensed and transmitted to beater bars of the two mokugyos (wooden drums used in Zen rituals) at the top, through rather thick wires which would withstand high voltage (e.g., AC 100 volts, commonly used for home appliances in Japan).
Koi-beat (see this page) is a carp-shaped rhythm machine with 16x4 monotonous square electric switches and it generates groovy rhythms at 100 Volts.
They play music with the various instruments they've invented – You'll see if you get a chance to go to that event in Paris. For the rest of us, hopefully they'll record the event and turn it into another "manufactured product" of theirs (in short, a CD or a DVD). [blogged by manekineko on we-make-money-not]
July 15, 2005
Capturing, Sharing and Reflecting Personal Histories
moo-pong--developed by researchers at Okude Lab, Keio University--is a technology that allows people to capture, share and view video images using a Tangible User Interface (TUI). When a camera captures video images, they are associated with physical tokens using RFID technology. "Users can edit and browse among moving images by dropping moo-balls into the moo-scope. Mirrors in the moo-scope produce visual effects like a kaleidoscope."
The idea of connecting video/audio with physical objects is not new. For example, Durell Bishop's Marble Answering Machine represents incoming voice messages as marbles. Ullmer's mediaBlocks use small wooden blocks "for the containment, transport, and control of online media." However, moo-pong's goal is more ambitious than classic TUI research projects: it is designed to help people capture, share and reflect on their captured life histories.
At this year's RFID Journal Live, Paul Saffo, research director at the Institute for the Future, said: "You are in the early stages of helping build a real, new kind of media revolution." moo-pong can be viewed as an example for thinking about such a media revolution in relation to RFID technology. via SIGGRAPH 2005 Website. [blogged by manekineko on we-make-money-not]
July 01, 2005
The Lonely Home Bench
The LonelyHome bench, by Tobi Schneidler and m.a.o.works, is a "socially intelligent design object", part domestic furniture and part robotic pet. You can use it as a normal bench, but it will come alive unexpectedly to confront you and challenge your presence. It might, for example, try to throw you off when you sit down, or move when you stroke it.
The LonelyHome is a relative of the RemoteHome, an apartment that exists in two cities at the same time and connects close friends over distance through tangible design elements. Video. The LonelyHome bench is at the Victoria & Albert Museum, London, until August 29th 2005, as part of the Touch Me exhibition. [blogged by Regine on we-make-money-not]
June 21, 2005
Mechanical Orchestra with you as Conductor
In Sale Away, passers-by can use their mobile phones to conduct an "orchestra" of household devices in a display window. The mechanical orchestra consists of flute-, organ- and brass-playing vacuum cleaners, rattling kitchen mixers, buzzing ventilators, radio-playing toy trains, wobbling jigsaws, dancing tumble dryers, humming refrigerators and other misused household utilities.
The conductor is a big refrigerator. This fridge is also the explanatory interface. To start the orchestra and wake up the shopping windows, you have to dial the number and follow the commands displayed on the window. The call opens the door of the fridge, revealing the image of its explanatory interface, the "mobile phone robot person". The robot will explain and invite you to act. By pressing keys on the phone, you can let the different instruments play along with the melody. You can set single voices or the whole orchestra tutti.
June 17, 2005
A Wearable Sculpture
The Head is a wearable sculpture with a connection to the internet and public access via SMS. The sculpture contains "an eye" (a camera lens) and a camera phone which captures images and sounds at certain intervals. People can send the Head an SMS, to which the sculpture responds by capturing an image and recording a short sound file. The image and sound are sent back to the SMS sender. They are also uploaded to a website with the date and time of the capture, accompanied by the sent SMS message.
The Head will be adopted and carried around by people, and its "vision and hearing" are both automated and triggered by the public to collect memories along the way. It will also be set (by the artist) to be adopted by people in professions which generally have "a view" on society: police officer, politician, tourist guide, teacher, etc. Author: Laura Beloff (one of the developers of the Seven Mile Boots). [blogged by Regine on near near future]