July 11, 2007
Avatar News: The Virtual Teacher +
Future avatars will be adept at manipulating human response
"Researchers from Illinois and Florida are developing a networking system which will create virtual representations of real people to improve our knowledge. They will use artificial intelligence and natural language processing software to enable us to interact with these avatars. The goal of the project, sponsored by the National Science Foundation (NSF), is to give us the possibility to interact with these virtual representations as if they were the actual person, complete with the ability to understand and answer questions. We should see the results at the beginning of 2008 — if the researchers succeed." [via]
"The current technology for creating computerized avatars for human interactions is relatively primitive; we tend to be surprised if a computerized representation can perform even vaguely human behaviors. But in the latest issue of Science, Judith Donath of MIT's Media Lab argues that this situation is likely to be temporary. She suggests that as programmers respond to the demand for more realistic human behavior in avatars, they will necessarily create the technology to manipulate human trust via the results.
Donath notes that even seemingly simple human behaviors are accompanied by collections of body language and expressions that can reinforce or undercut the messages we intend to send. For instance, she suggests that we signal our intention to engage someone in conversation by a complex suite of gestures, mostly nonverbal: "You carry out this goal not only by walking across the room but also by making eye contact, smiling, raising your brows, adjusting your clothes—a complex set of communicative behaviors that indicate your intention to start a conversation, allow you to gauge his willingness to do so, and show your level of determination."
Right now, even the most sophisticated avatars accomplish only a small subset of these behavioral collections. But that's beginning to change, at least within the research community. Although putting all of the components of these behaviors under user control is viewed as too complex, Donath cites work in which entire suites of behavior could be controlled by a single command. For example, an avatar commanded to end a conversation can nod its head, wave, and break eye contact. Users of such systems found them natural and more engaging, and they found their conversation partners to be more expressive.
Although these sorts of advances may make for a more appealing virtual experience, Donath suggests that they have some disturbing implications for issues of trust and credibility when future avatars are used for communication. She notes that we interpret many behavioral collections in light of what they tell us about the person who is doing the talking. For example, we tend to view someone who doesn't make eye contact as more likely to be lying, providing uncertain information, or simply uninterested in talking with us. A well-programmed avatar can be commanded to engage in behaviors that simulate honesty, regardless of whether the speaker is trustworthy.
Research is also revealing that other factors play into an avatar's trustworthiness and credibility. For example, simply making an avatar appear more human (including providing it with a clear gender) caused it to be rated more trustworthy. Other research has shown that trust can also be manipulated via more subtle techniques. Teams of people paid greater attention to an avatar that was created with a "team face," one that combined features from the members of the team. Individuals found political messages more persuasive when they were delivered by an avatar with a subtle resemblance to the listener's own face.
Donath specifically raises the disturbing possibility of, "a world in which you are bombarded with oddly compelling ad campaigns presented by people just like you." But she also suggests that current trends, if they continue, may leave us with avatars tailored to the contexts where they're used. After all, we demand detailed and elaborate avatars for our online fantasy games but will happily accept far less for what's billed as our "Second Life." It's possible that we'll actually wind up demanding less sophisticated avatars to deliver the most critical information in order to avoid the potential for manipulation. But only, of course, if we're aware of the danger." Future avatars will be adept at manipulating human response by John Timmer, Ars Technica. Related: Our avatars, ourselves.
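Donath's point about single commands expanding into whole suites of behavior is easy to picture as a lookup from command to gesture list. A minimal Python sketch; the command and gesture names are invented for illustration, not taken from any actual avatar system:

```python
# Hypothetical mapping from a high-level command to the suite of
# gestures it should trigger, per Donath's "end a conversation" example.
SUITES = {
    "end_conversation": ["nod_head", "wave", "break_eye_contact"],
    "start_conversation": ["make_eye_contact", "smile", "raise_brows"],
}

def perform(command: str) -> list[str]:
    """Expand one command into its full behavioral suite;
    unknown commands produce no gestures."""
    return SUITES.get(command, [])

print(perform("end_conversation"))  # ['nod_head', 'wave', 'break_eye_contact']
```

The point of the single-command design is exactly this indirection: the user issues one intention, and the system supplies the nonverbal detail.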
July 09, 2007
Camping in the Digital Wilderness:
Tents and Flashlights as Interfaces to Virtual Worlds
"ABSTRACT: A projection screen in the shape of a tent provides children with a shared immersive experience of a virtual world based on the metaphor of camping. RFID aerials at its entrances sense tagged children and objects as they enter and leave. Video tracking allows multiple flashlights to be used as pointing devices. The tent is an example of a traversable interface, designed for deployment in public spaces such as museums, galleries and classrooms.
Keywords: Virtual environments, immersive and traversable interfaces, RFID, video tracking, single display groupware.
INTRODUCTION: Pitching a tent and spending the night under canvas, with friends, a rucksack, a flashlight, surrounded by strange shadows and sounds, is an exciting experience for most children. It is also often the closest that many come to the wilderness, a primitive, unfamiliar place that is far removed from their everyday world...
THE TENT AS HCI: As an interface, the tent reflects several current concerns within HCI. First, it represents an example of a traversable interface that provides the illusion of crossing into and out of a virtual world. Previous examples have included fabric curtains, sliding doors, hinged screens and even water sprays. The tent demonstrates an alternative in which participants enter a space that is defined by the screen, but where, unlike CAVE-style immersive interfaces, the space outside the screen is also part of the experience." From Camping in the Digital Wilderness: Tents and Flashlights as Interfaces to Virtual Worlds by Jonathan Green, Holger Schnädelbach, Boriana Koleva, Steve Benford, Tony Pridmore, Karen Medina (CHI 2002). [via pasta and vinegar]
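The flashlights-as-pointing-devices idea boils down to locating a bright spot in each camera frame. A minimal Python sketch, assuming a grayscale frame represented as a list of rows of pixel intensities (a real system would track a blob across frames, not a single pixel):

```python
def flashlight_position(frame, threshold=200):
    """Return (row, col) of the brightest pixel in a grayscale frame,
    or None if nothing exceeds the threshold -- a crude stand-in for
    the paper's video tracking of flashlight beams."""
    best, pos = -1, None
    for r, row in enumerate(frame):
        for c, value in enumerate(row):
            if value > best:
                best, pos = value, (r, c)
    return pos if best >= threshold else None

# A toy 4x6 frame with a bright "beam" at row 1, column 4:
frame = [[0, 0,  0, 0,   0, 0],
         [0, 0,  0, 0, 255, 0],
         [0, 0, 10, 0,   0, 0],
         [0, 0,  0, 0,   0, 0]]
print(flashlight_position(frame))  # (1, 4)
```

Supporting multiple flashlights, as the tent does, would mean finding several bright blobs per frame rather than one maximum.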
MIT Media Lab: Responsive Environment Group
Dual Reality Lab
"Dual reality" is the concept of maintaining two worlds, one virtual and one real, that reflect, influence, and merge into each other by means of deeply embedded sensor/actuator networks. Both the real and virtual components of a dual reality are complete unto themselves, but are enriched by their mutual interaction. The Dual Reality Media Lab is an example of such a dual reality, enabled by the Plug sensor/actuator network that links our actual lab space to a virtual lab space in the Second Life online virtual world. [MOV]
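The real-to-virtual linkage can be pictured as a simple mapping from sensor readings to virtual actuator settings. A Python sketch; the sensor and actuator names here are invented for illustration and do not reflect the Plug network's actual protocol:

```python
def mirror_state(real_sensors, scale=1.0):
    """Map real-world sensor readings onto virtual-world actuator
    settings -- a toy stand-in for the Plug network's real<->virtual
    linkage. Key names are hypothetical."""
    return {
        # Ambient light (lux) drives a virtual lamp, clamped to [0, 1].
        "virtual_lamp_brightness": min(1.0, real_sensors.get("light_lux", 0) / 1000 * scale),
        # A motion event makes the mirrored avatar appear present.
        "virtual_avatar_present": real_sensors.get("motion", False),
    }

state = mirror_state({"light_lux": 500, "motion": True})
print(state)  # {'virtual_lamp_brightness': 0.5, 'virtual_avatar_present': True}
```

The "dual" part of dual reality is that the same mapping runs in reverse: events in the virtual lab drive actuators in the physical one.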
SLIDE 7: Virtual Worlds >> Many attempts, many failures >> Potential to be as revolutionary as the WWW >> Example: Second Life by Linden Lab >> Key attributes: – shared immersive experience – persistent state – market economy – creative medium.
SLIDE 8: Taxonomy of Reality >> Virtual Reality (all simulated) >> Mixed Reality (some real, some simulated) >> Reality (all real) >> Augmented Reality (all real, some simulated) = mono realities. ALL OF THE SLIDES [PDF]
Gadgets may help merge virtual reality with real life
Living Through our Avatars
Gadgets may help merge virtual reality with real life by Mark Baard, Globe Correspondent, July 9, 2007: That hipster you always see talking into his Bluetooth headset might soon be able to use a similar device to leap into Second Life without even stepping out of line at Trader Joe's.
The company behind Second Life, Linden Lab, hopes to introduce hand-held and wearable systems that act as gateways between the real and virtual worlds. Linden Lab and other virtual worlds also are developing versions that run on existing mobile phones.
Researchers at a recent virtual worlds conference at MIT said that special eyewear, display "badges," and speakers worn about the neck will allow us to live more fully through our avatars -- those idealized versions of ourselves that typically boast better proportions than the saggy originals.
Second Lifers wearing the gadgets will be able to attend "in-world" parties and gallery openings, whether they are sucking down beers at Cornwall's or stuck in Fenway traffic. Motion detectors and other sensors in the devices will also show your virtual mates what you are up to in the real world.
It might sound like public safety officers will need to shift focus away from the risks associated with driving while chatting on cellphones to the inherent dangers of operating in two realities at the same time. But conference participants said such concerns are premature.
"It's like you're not going to be allowed to be in a virtual world while driving in the real world," said Robert Sutor, vice president of open source and standards at IBM.
Linden Lab vice president Joe Miller described one of the early products that will bridge the two worlds as a wearable box that creates a "3D sound field" that allows the wearer to hear voices from his virtual world without completely shutting out the real people around him.
The prototype speaker device presented to Linden recently by a developer "is not ready for prime time yet but it's working pretty well," said Miller, speaking at "Virtual Worlds: Where Business, Society, Technology & Policy Converge," sponsored by MIT and IBM.
Linden is encouraging open source developers to create client software for mobile devices. And Blizzard Entertainment, creator of the online multiplayer game World of Warcraft, is hiring developers with experience in Symbian and Adobe Flash Lite for its mobile interface initiative.
Conference participants said cellphones are likely to be the first mobile devices to create two-way connections between real and virtual reality.
"The idea of cell phone as sensor has started to catch on in the sensor network community," Joseph Paradiso, leader of the Responsive Environments Group at the MIT Media Lab, wrote in an e-mail last week. "They're much heavier platforms than usually seen in sensor networks, but they are certainly ubiquitous!"
ResEnv has produced a prototype "tricorder" -- inspired by the information-synthesizing gadget from "Star Trek" -- that gathers data from real-world surroundings and translates that information into virtual desks and chairs.
In a video at the ResEnv website, media.mit.edu/resenv, grad students demonstrate how the tricorder's sensors can detect someone swiveling in a desk chair and typing on a computer keyboard. The device can also show the user what is happening in the virtual space he or she is helping to create.
It will take some retooling before virtual worlds can accommodate all of the data streaming from ubiquitous sensors.
"We're talking with Linden Lab [about creating] more efficient pipes of sensor data into their environment," said Paradiso. "I can certainly stream video, but I can't efficiently input diverse sensor data."
July 06, 2007
Nina Czegledy reports on Media Forum 2007
Between June 25 and 28, Media Forum Moscow, in collaboration with the Moscow International Film Festival, presented a series of events under this year's theme, Nudity/Game Console: a Vito Acconci retrospective, round table discussions and a video art competition. The ERA Foundation hosted the Media Forum events in its centrally located, elegantly renovated gallery space.
In this report I would like to focus on the round table discussions - especially as the majority of the presenters happened to be women working in research and practice in digital fields and communities. Instead of lengthy descriptions, links are provided below for further information.
"Cultural cooperation online", the first discussion on June 26th was presented by Angela Plohman, content developer of Labforculture. The organization provides extremely useful tools for those in the arts who wish to collaborate across borders. The constructive, practical value of this information and knowledge platform was very much appreciated by the audience as attested by the numerous questions and comments.
On June 27, Anne Nigten of V2 lectured on "Research and development in the interdisciplinary field from an art perspective" followed by Dmitry Bulatov on "The third modern - denuding the media. The technobiological art work." Last but not least Margarete Jahrman showed us "Pong Dress" and Ludic Society. All of these presentations were very well received with lively Q&A periods.
Next day, June 28, "Super-Embodiment of Woman Artists in Media Arts" was presented by Irina Aristarkhova, Nina Czegledy and Elena Kovylina. Irina noted in her introduction that "Nudity and the Nude have become key issues in contemporary art, theory and politics. Women artists face what Foucault called the 'hysterization of the female body', while male artists face the issue of the 'absent male body' (Kelly Oliver) and respond to it with various strategies. One might argue that both Western and Eastern European women artists have exhibited 'too much body', and to a certain extent find it difficult to leave 'body' behind. However, we rarely discuss what impact socialist gender policies and practices have had on this process within aesthetics. If performance art leaves us with a legacy of 'too much body', of 'super-embodiment', one wonders if it morphs into (new) media art as a question of 'machine'/'cyborg' embodiment and its identity."
In the course of our presentations both Irina and I emphasized that feminism and gender issues cannot be separated from the particular history of the region. Lack of clarity on this issue leads to numerous misconceptions and miscommunications. Case histories of media art were presented, including "I am a robot" by Boryana Dragoeva Rossa (Bulgaria) and "Reality Resonance" by Erika Katalina Pasztor (Hungary), followed by the outstanding Russian performance artist Elena Kovylina showing her "Pick a Girl" video performance, featured at the Sydney Biennale 2006. The questions and comments at the end of our panel revealed that controversy and strife are still embedded in this discourse.
The schedule left room for us to visit some artists' studios and to see the mega-retrospective by Oleg Kulik at Vinzavod, an old factory converted into a mixed-use art centre and luxury boutiques, where hopefully the Media Art Lab will have its future home. There is so much more, including Art4, the private contemporary art museum, Moscow Rolls Royce, the Electroboutique, traffic jams, night life and more. Moscow is worth a visit!
nina [via Spectre]
July 02, 2007
Treadmill jogging in Second Life
June 29, 2007
Wikipedia in Second Life
Feedamass, the “know-it-all” sidekick that fetches information to nearly any device, also functions in virtual spaces such as Second Life. A script sends a search query from Second Life chat to Wikipedia, and the answer comes back in-world as a silent message to the user. It's all contained in a ring worn on your finger ... [via New World Notes]
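The relay the post describes (chat in, Wikipedia answer out) could be sketched in Python as follows. The "/wiki" chat trigger is an invented convention, and Wikipedia's REST summary endpoint stands in for whatever API the actual script uses:

```python
from urllib.parse import quote

def extract_query(chat_line, trigger="/wiki "):
    """Pull the search term out of a Second Life chat line; the
    '/wiki' trigger is a hypothetical convention for this sketch."""
    if chat_line.startswith(trigger):
        return chat_line[len(trigger):].strip()
    return None

def wikipedia_summary_url(title):
    """Build the URL a relay might fetch: Wikipedia's REST summary
    endpoint returns a short plain-text extract for a page title."""
    return "https://en.wikipedia.org/api/rest_v1/page/summary/" + quote(title.replace(" ", "_"))

q = extract_query("/wiki Second Life")
print(wikipedia_summary_url(q))
# https://en.wikipedia.org/api/rest_v1/page/summary/Second_Life
```

In-world, an LSL script would make the equivalent HTTP request and whisper the returned extract back to the wearer of the ring.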
Are you Living in a Computer Simulation?
ABSTRACT: This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation. A number of other consequences of this result are also discussed. Are you Living in a Computer Simulation? by Nick Bostrom, Department of Philosophy, Oxford University.
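The argument rests on a simple ratio (here in a simplified form of the fraction the paper derives): if f_p is the fraction of civilizations that reach a posthuman stage and n_sims is the average number of ancestor-simulations such a civilization runs, the fraction of human-like observers who are simulated is f_p·n_sims / (f_p·n_sims + 1). A quick Python check of how fast that fraction approaches one:

```python
def fraction_simulated(f_p, n_sims):
    """Simplified form of Bostrom's ratio: the fraction of all
    human-like observers who live in simulations, given the fraction
    of civilizations reaching posthumanity (f_p) and the average
    number of ancestor-simulations each runs (n_sims)."""
    return f_p * n_sims / (f_p * n_sims + 1)

# Even a 1% chance of posthumanity plus a modest thousand simulations
# makes simulated observers the overwhelming majority:
print(fraction_simulated(0.01, 1000))  # 10/11, about 0.909
```

This is why the abstract's trilemma works: unless f_p or n_sims is driven close to zero (propositions 1 and 2), the fraction is close to one (proposition 3).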
"Why the Matrix? Why did the machines do it? (Human brains may be many things, but efficient batteries they are not.) How could they justify a world whose inhabitants are systematically deceived about their fundamental reality, ignorant about the reason why they exist, and subject to all the cruelty and suffering that we witness in the world around us?" From Why Make a Matrix? And Why You Might Be In One by Nick Bostrom.
Also see: Simulism is a concept that deals with the possibility that we are living in a simulation.
Double Skin/Double Mind
Experience a workshop, virtually
The interactive installation version of Double Skin/Double Mind will preview at Beeldmedia Studio, Amsterdam School of the Arts, Jodenbreestraat 3 on June 29 + 30 from 16:00-20:00.
Double Skin / Double Mind is a virtual version of the Double Skin / Double Mind workshop. This workshop, which has been taught by dance company Emio Greco | PC since 1996, represents the basis of the creative work of choreographers Emio Greco and Pieter C. Scholten. Participants in this workshop are challenged to discover new interpretations of their dancing body.
Throughout 2006 an interdisciplinary research group, consisting of dance notators Eliane Mirzabekiantz and Marion Bastien, motion capture researcher Frédéric Bevilacqua, cognitive neuroscientist Corinne Jola, media artist Chris Ziegler, cinematographer Maite Bermudez, artistic research specialist Scott de Lahunta and EG | PC researcher Bertha Bermudez, has been analysing and documenting this specific workshop.
The different data from each research area have been collected, and interactive graphic visualisation tools and motion capture were re-implemented to create this preview of the interactive installation Double Skin / Double Mind. The current installation offers participants the possibility of taking part in a virtual version of the workshop in real time, while receiving verbal, physical and peripheral information. By following a life-size moving figure, participants will recognize, compare and understand their actions and involvement in the practice of this workshop. They will travel through the Double Skin / Double Mind structure in a mental and physical way, experiencing what the different layers are.
Concept and realization: Chris Ziegler (ZKM Karlsruhe), Frédéric Bevilacqua (IRCAM, Paris), Bertha Bermudez (Emio Greco | PC)
Co-production with: Amsterdam School of the Arts, research group Art Practice and Development, Marijke Hoogenboom
With thanks to: Maite Bermudez, Katharina Pohlmann, Eliane Mirzabekiantz, Marion Bastien, Corinne Jola, Nederlands Instituut voor Mediakunst / Montevideo, Scott deLahunta, Jeroen Fabius, Paul van der Ploeg, Cinedans and the EG | PC team
Project supported by: Prins Bernhard Cultuurfonds
Emio Greco | PC receives funding from: the Dutch Ministry of Education, Culture and Science and from the Dutch Fund for Amateur Art and Performing Arts.
June 28, 2007
The $20,000 Question:
Are we witnessing ... the emergence of a global mind?
Does the name Mitch Kapor sound familiar? If you are interested in the history of SL, the answer may well be yes, because he was one of LL's earliest investors. "Mitch Kapor was the only person who got it", said Rosedale in an interview with Inc. Magazine.
Mitch Kapor first came to my attention through an essay of his, published in 2002 on Kurzweilai.net. As with LL and SL, Kapor was putting money forward in anticipation of a future outcome, but this time the money was riding on a failure, not a success. The bet centred on a question: will the Turing Test be passed by a machine by 2029? Ray Kurzweil said 'yes', Kapor said 'no', and whoever loses will donate $20,000 to a charity selected by the winner.
In his essay, Kapor explained why he was sceptical of the possibility that a machine will ever pass the test. 'To pass the test, a computer would have to be capable of communicating via this medium (text) at least as competently as a person. There is no restriction on the subject matter…It is such a broad canvas, in my view, that it is impossible to foresee when, or even if, a machine intelligence will be able to paint a picture which can fool a human judge'. Kapor further elaborated on why a computer can never mimic a person, but what struck me as I reread this essay recently was this: just possibly, SL may prove to be a crucial link in the enabling technologies of human-like intelligence.
What will it take to build a machine that you can chat with as if it were a person? Decades of research into this question have yielded three vital requirements: power, organization, and education. The first requirement, power, means building hardware that matches the computational capacity of the human brain. If you have a top-spec PC, then you have at your disposal something with the equivalent brainpower of a fish, a millionfold too weak to do the job of a human brain (which Moravec estimates at 100 million MIPS). In fact, though, you do have access to a 'computer' capable of matching the raw power of the brain, especially if you connect to SL. As Rosedale explained to Tim Guest, 'the combined computational capacity of the aggregate SL grid, running 24 hours a day as it does now, is in excess, by almost any measure, of at least one human brain at this point in time'. Of course, it would be a waste of resources to use the grid simply to simulate ONE human, when it can instead be used to run a virtual world harnessing the creative powers of tens of thousands of real people at any particular moment.
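The millionfold gap invites a back-of-the-envelope check. Assuming (as the text does not explicitly) that hardware performance doubles every 18 months, closing that gap takes roughly thirty years:

```python
import math

BRAIN_MIPS = 100_000_000          # Moravec's estimate, quoted in the text
PC_MIPS = BRAIN_MIPS / 1_000_000  # "a millionfold too weak", so ~100 MIPS

# Number of doublings needed to close a millionfold gap:
doublings = math.log2(BRAIN_MIPS / PC_MIPS)

# At one doubling every 1.5 years (a Moore's-law-style assumption):
print(round(doublings * 1.5, 1), "years")  # 29.9 years
```

Which, coincidentally or not, lands close to Kurzweil's side of the 2029 bet.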
In any case, the second requirement (organization) refutes the possibility of the SL grid 'waking up' to self-consciousness. It is not sufficient simply to match the 100 million MIPS of a human brain; we must also understand how the brain is organized, how it processes information. Thanks to functional brain imaging, we are beginning to understand how this organ differs from a computer, and the field of neuromorphic modelling is focused on building hardware and software that is 'functionally equivalent'. Currently, brain imaging only hints at the underlying principles of human intelligence; it is not yet capable of following the actual information being transformed in real time. Also, as mentioned above, we currently lack the raw power needed to model all of the several hundred regions of the brain, at least on any computing system whose precious resources are not better used elsewhere. What we have achieved so far is to develop highly detailed mathematical models of several dozen of the several hundred types of neurons in the brain. Researchers have connected artificial neurons with real neurons from spiny lobsters, and found that this hybrid biological-nonbiological network performed in the same way as, and produced the same type of results as, an all-biological network of neurons. Combining neuron modelling with interconnection data obtained from brain scanning has enabled science to reverse-engineer more than two dozen of the several hundred regions of the brain. Again, in some cases, the neuromorphic models have been connected to real brains. In 2006, researchers built a chip that mimicked a section of rat hippocampus. The real section was removed, the artificial replacement wired in place, and it restored function with 90% accuracy.
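To make "mathematical models of neurons" concrete, here is one of the simplest such models, a leaky integrate-and-fire neuron, in Python. The detailed models the text refers to are far richer; the parameters below are arbitrary:

```python
def lif_neuron(inputs, v_rest=0.0, threshold=1.0, leak=0.9, weight=0.3):
    """A leaky integrate-and-fire neuron: membrane potential leaks
    toward rest each step, input events push it up, and crossing the
    threshold produces a spike and a reset. Returns spike times for a
    list of binary input events."""
    v, spikes = v_rest, []
    for t, x in enumerate(inputs):
        v = v * leak + weight * x   # leak, then integrate the input
        if v >= threshold:          # fire and reset
            spikes.append(t)
            v = v_rest
    return spikes

# Five input events in a row accumulate enough charge to fire once:
print(lif_neuron([1, 1, 1, 1, 1, 0, 1]))  # [3]
```

Real neuromorphic models add ion-channel dynamics, refractory periods and realistic time constants, but the integrate-leak-fire loop is the common skeleton.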
Given that brain imaging tools are steadily improving, and computers are getting more powerful, there is no reason to suppose that we cannot reverse-engineer every neuron and every region, and so build an entire brain. And contemporary examples of hybrid networks make for a curious thought experiment. What if we were to remove a neuron from Mitch Kapor's brain, and put in its place its neuromorphic twin? If the artificial neuron sends and receives information just like its biological predecessor did, it seems hard to argue that Kapor's behaviour would be affected. Now suppose that, step by step, his entire brain is replaced. Remember, we have already partially performed this experiment on rats and retained function with 90% accuracy. Subsequent generations of chips are likely to close the gap and creep towards 100%. So, hypothetically, if we systematically replace Kapor's brain, ensuring at every step that the hybrid biological/nonbiological net is behaving normally, Kapor would retain the abilities we associate with human intelligence. But if we keep going, ultimately ALL of the biological brain will have been replaced. Where once there was a brain there is now an astonishingly complex machine. Equally, instead of replacing a pre-existing organic brain, we could just build a neuromorphic model and install it in a robot with appropriate sensors that feed it information corresponding to sight, touch, smell and taste. Why would this robot, this machine, not be capable of behaving like a real person? We could, after all, replace each part of Kapor with an artificial version: a robotic eye, a robotic limb, a robotic heart, and so on until he is 100% artificial. You could perhaps argue he is no longer 'human' (though I defy you to pinpoint the exact point where humanity was lost), but Kapor could argue quite convincingly that he is a person, and deserves to be treated as such. Why would a robot built from the same parts not also be able to argue its case?
You could answer that by asking this: Can a one-year-old baby pass a Turing Test? The answer is clearly no, because a baby has yet to develop the capabilities associated with human intelligence. To be sure, some functionality is 'hard-wired' into our brains from birth, but many more capabilities develop only as the baby spends years interacting with reality. The same thing would apply to our robot. We should not expect to build it, turn it on, and have it immediately engage us in conversation about the novels of Proust or the price of sprouts. No, we will have to provide the third requirement: education.
It is this requirement that Kapor is betting will fail. ‘Part of the burden of proof for supporters of intelligent machines is to develop an adequate account of how a computer would acquire the knowledge it would be required to have to pass the test…I assert that the fundamental mode of learning is experiential…most knowledge, especially that having to do with physical, perceptual and emotional experience is not explicit, never written down…the Kurzweil approach to knowledge acquisition (he argued that the AI would educate itself by ‘reading all literature and by absorbing the knowledge contained on millions of websites’) will fail’.
Kapor argues that human beings are embodied creatures, grounded by our physicality and in many ways defined by it. This logically leads to the observation that there is an intimate connection with the environment around us. 'Perception and interaction with the environment is the equal partner of cognition in shaping experience', he reasoned. The qualities we associate with human intelligence were shaped by evolution, but for humans there is another form of heredity to consider, along with natural selection of genetic information. That additional form is 'culture'. Our social networks evolved the rules that define common sense and artistic sensibilities, never written down but nevertheless transmitted from mind to mind. We can therefore identify a crucial step in achieving Turing AI: the construction of an ambitious 'laboratory', consisting of an entire environment in which a network of social and cultural relationships can grow almost from the ground up. Of course, we have many such laboratories already, for as Edward Castronova explained, 'we have real human societies that grow up on their own within computer-generated fantasy worlds'.
There is a pretty sound argument, championing SL above the likes of ‘World of Warcraft’. Yes, WOW has a greater population (though for how much longer is open to question) but it does not have the degree of self-organization we see in SL. It is a mistake to think evolution is only a means of shaping life to fit its environment, because the environment is shaped by the presence of life. Both are in a state of constant change. An emergent property by definition cannot be achieved with a centralized system and the degree of emergence required to achieve a suitably complex evolved culture can only happen in a dynamic environment that is shaped by the populace. This essential quality is built into the very concept of reality, as defined by LL. ‘The thing we concluded is that something is only real if you can change it. If there’s a pixel on the screen in front of you in SL, and you can’t alter it, then why would we put it there?’
This may run counter to many people's concept of reality. After all, my belief in the objective reality of the sun (for example) is based on the observation that it remains as it is, no matter what my whims may be. But the immutable and the alterable are not so separate as they seem. The fixed laws of the universe are what make creativity possible, because total chaos makes learning an impossibility. The Lindens could demonstrate this. If the behaviour of prims randomly changed each day, to the extent that nothing you learned today was applicable tomorrow, creativity of any meaningful kind would not be feasible.
Of course, the Lindens recognise the importance of stability. 'We are trying to create a close reproduction of the actual, physical world we live in — one that will easily be comprehensible and useful to us because it so closely resembles ours'. If, as Kapor suggests, an essential component of human-equivalent AI is to be intimately connected to an environment, our collaborative efforts to build exactly that can reasonably be seen as a step in the right direction.
But, why bother building a simulated world when there is a real one ready to go? Why not build physical robots interacting with real people, as opposed to bots conversing with avatars? Well, a virtual world has an advantage in that everything can, in principle, be recorded. Given that the entire world is computer-modelled, it is technically possible to record every movement, gesture, and interaction that takes place. This could be advantageous for scientists wishing to ‘download’ patterns of information ‘never directly expressed’ so that our infant AI can acquire a knowledge of human experience that occurred in the past but was tacit.
Another advantage of growing AI in a computer-modelled world is that it puts both ‘artificial’ and ‘real’ people on more of an equal footing. Indeed, this is a requirement of the Turing Test; prejudging personhood by observing who is the robot and who is the human violates the rules. An avatar controlled by a person you cannot see (or is the avatar under the control of AI?) is more in keeping with the conditions of the test. Another sense in which the playing field is levelled is that both ‘bot’ and ‘avatar’ are in a more basic state of learning about the social rules appropriate to their environment. We are, in a cultural and artistic sense, both ‘children’ learning through trial and error.
But while projects like 'Neufreistadt' are fascinating studies in the emergence of governance, it could be argued that such systems require a higher-order intentionality beyond the capability of an infant's mind to model. Worlds like SL develop from a more basic level than the modern society we are born into, but perhaps not quite basic enough to evolve higher-order intentionality (a theory of mind, in other words) from scratch. In the sci-fi novel 'Accelerando', Charles Stross attributes consciousness to 'a product of an arms race between predator and prey'. More precisely, a product of a mind's ability to model behaviour. The hawk runs an internal simulation of its prey's likely behaviour, calculating the direction it will run when it senses danger. The sparrow, meanwhile, uses its model of the hawk's mind to calculate its likely attack strategy and execute an effective evasion. Natural selection weeded out the less effective theories of mind, until for certain genes survival required cooperation among 'a species of ape that used its theory of mind to facilitate signalling — so the tribe could work collectively — and then reflexively, to simulate the individual's own inner states'.
Stross attributes human-level consciousness to a pairing of signalling and introspective simulation. Can a simulated world evolve a theory of mind from the ground up? That is a question being explored by ‘New and Emergent World models Through Individual, Evolutionary and Social learning’ — NEW TIES. The project, which brings together a consortium of researchers in AI, language evolution, agent-based simulation and evolutionary computing, seeks to use grid computing to model an environment inhabited by millions of agents, each one a unique entity with characteristics including gender, life expectancy, fertility, size and metabolism. Sexual reproduction will be possible, with agents able to reproduce and their offspring inheriting a random selection of their parents’ ‘genes’. Also, by pointing to objects and using randomly generated ‘words’, the project hopes to develop culture, which it defines as ‘knowledge structures shared among agents that reflect aspects of the environment, including other agents’.
In summary, the NEW TIES project states, ‘we will work with virtual grid worlds and will set up environments that are sufficiently complex and demanding that co-operation and communication is necessary to adapt to given tasks. The population’s weaponry to develop advanced skills bottom-up consists of individual learning, evolutionary learning, and social learning (which) enables the society to rapidly develop an understanding of the world collectively. If the learning process stabilizes, the collective must have formed an appropriate world map’.
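The mechanics the project describes (agents with heritable traits, offspring inheriting a random selection of their parents' 'genes') can be sketched in a few lines. This is an illustrative sketch only, not the NEW TIES implementation: the trait names follow the article, but the ageing, mating and mutation rules here are invented for the example.

```python
import random

# Trait names follow the article's description of NEW TIES agents.
TRAITS = ["life_expectancy", "fertility", "size", "metabolism"]

def make_agent(genes=None):
    # Founders get random gene values in [0, 1); age starts at zero.
    if genes is None:
        genes = {t: random.random() for t in TRAITS}
    return {"genes": genes, "age": 0}

def reproduce(a, b):
    # Offspring inherit a random selection of the parents' 'genes',
    # with a small chance of mutation.
    child = {}
    for t in TRAITS:
        child[t] = random.choice([a["genes"][t], b["genes"][t]])
        if random.random() < 0.01:
            child[t] = random.random()
    return make_agent(child)

def step(population, capacity=200):
    # One generation: agents age and die once they exceed their
    # life expectancy; surviving pairs may produce offspring.
    survivors = []
    for agent in population:
        agent["age"] += 1
        if agent["age"] < agent["genes"]["life_expectancy"] * 100:
            survivors.append(agent)
    random.shuffle(survivors)
    offspring = [reproduce(a, b)
                 for a, b in zip(survivors[::2], survivors[1::2])
                 if random.random() < min(a["genes"]["fertility"],
                                          b["genes"]["fertility"])]
    return (survivors + offspring)[:capacity]

population = [make_agent() for _ in range(50)]
for _ in range(20):
    population = step(population)
```

Even a toy version like this exhibits selection: genes for short life expectancy or low fertility tend to disappear from the population over generations.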
Such work cannot help but provoke questions about our own existence. Here, we have patterns of information that will (it’s hoped) organise into structures capable of introspection and communication. What, then, are we humans? Patterns of matter and energy evolving in the ‘real’ universe… or are we, too, information running as part of a simulation built by lofty intelligences, curious about us because they are curious about their origins? Are our avatars brave pioneers of the ‘Third Life’, rather than the second?
And what place does SL occupy in the grand scheme of things? Evolution was using co-operation long before culture developed. A single-celled organism is a vast society of chemicals. An animal is a vast society of cells. Our modern cities are a vast society of animals. Philip Rosedale foresees the metaverse as the next logical step in the emergence of a single entity consisting of a society of interdependent agents. ‘We think a lot about the nature of the brain, and whether computational substrates can be dense enough to enable thinking within them. I know exactly how that’s going down, I think… SL is dreaming. It could be looked at as one collective dream. In an almost neurological sense’.
Are we witnessing the early stages of the emergence of a global mind? Will the TCP/IP nodes of the internet evolve into functioning neurons, resulting in a free-thinking entity capable of introspecting upon all human knowledge? And if its immense computational prowess dreams of imaginary people, will they wonder about a Creator, and try to reconcile their beliefs with an increasing understanding that the ‘rules’ of their universe evolve complexity from the bottom up? More importantly, will this happen by 2029? Will Mitch Kapor lose his bet, thanks to the ‘collective dream’ of the Metaverse?
Perhaps we should not use terms like ‘winners’ and ‘losers’ here. Perhaps SL and its successors will not help develop general artificial intelligence, but SL already showcases the marvellous abilities of people that Kapor so eloquently expressed in his essay. Read it for yourself, and then dive into SL to see what we can do with our collective mind. -- The $20,000 Question: An Essay by Extropia DaSilva. Related.
Mediated: How the Media Shapes Your World and the Way You Live in It
On Real and Unreal Time
"(W)e are living a fusion of real and unreal time, an ongoing undulation of overlays and intersections...It's most like the way good old-fashioned thinking and imagining work in relation to sensing and perceiving ... It says that back before representational technologies developed, before literacy itself, people were also living in a fusion of real and unreal time because they were daydreaming while they were doing this or that. Just having a mind is to be in unreal time as well as in real time ... What that says is that representational technologies have colonized our minds ... To the extent that our thoughts no longer wander around on their own, stocked only with materials drawn from direct experience, to the extent that they follow flows of representations instead--to just that extent that we don't think our own thoughts. Literally...
When the term first arose, "real-time" implied speed, intensified velocity. The medium doing the representing was transforming reality into representation immediately. The expression was first used in connection with digital processing of information ... It was a term of praise that focused on how fast a computer could record the file transaction as compared with paper-shuffling clerks. It wasn't until the fact that computers could keep up with events was taken for granted that we noticed that security cameras in public places were real-time media too. And nothing seems slower than those! How strange. Why is that? No editing. No manipulation of what is presented.
In the same way, an innovation like videoconferencing could surprise us with a real-time capacity that the telephone had all along. But we only noticed that a lot of analog media were in real time after computers achieved sufficient processing speed to do it too. It was the malleability of digital transformations that made the difference. The fact that we could now manipulate what had once just been conveyed on a screen or over a wire, that's what got the juices going. That's why "interactive" became the mother of all buzz words. The idea was that real time emerged when we became players on screens we had once viewed passively. The fusional loop of subject-object that is a video game expresses most cogently the thrill of real-time existence in unreal realms. You tweak the joystick and press the buttons and virtual swords flash and machine guns blaze in some tunnel on an asteroid in a distant galaxy--not as a result of, but as a function of, at the same time as, your fingers on the console. You exist as agent and instrument simultaneously in two places, in the meat world of fingers and consoles and the virtual world of cyborg warriors. Representational being incarnate. The primordial aim of the human imagination realized--literally "made real."
So "real time" is a compliment we pay to representations that reflect our agency either directly or in the way they conform to our designs subsequently ... Incidentally, remember when people thought that the Web was going to build bridges between communities and inspire cross-cultural understanding, etc.? ... The multiplication of niches has been so intense that the word fragmentation doesn't begin to describe it. What with these search worms and filters and custom advertising hooking you up with stuff you're already interested in ... you can spend your whole life online and never leave your own head." From Mediated: How the Media Shapes Your World and the Way You Live in It by Thomas De Zengotita.
June 27, 2007
Sentient World Simulation
War Games on the Grandest Scale
"Perhaps your real life is so rich you don't have time for another. Even so, the US Department of Defense (DOD) may already be creating a copy of you in an alternate reality to see how long you can go without food or water, or how you will respond to televised propaganda. The DOD is developing a parallel to Planet Earth, with billions of individual "nodes" to reflect every man, woman, and child this side of the dividing line between reality and AR.
Called the Sentient World Simulation (SWS), it will be a "synthetic mirror of the real world with automated continuous calibration with respect to current real-world information", according to a concept paper for the project. "SWS provides an environment for testing Psychological Operations (PSYOP)," the paper reads, so that military leaders can "develop and test multiple courses of action to anticipate and shape behaviors of adversaries, neutrals, and partners".
SWS also replicates financial institutions, utilities, media outlets, and street corner shops. By applying theories of economics and human psychology, its developers believe they can predict how individuals and mobs will respond to various stressors. Yank a country's water supply. Stage a military coup. SWS will tell you what happens next." Continue reading Sentient world: war games on the grandest scale by Mark Baard.
June 22, 2007
Will Web 2.0 Kill Cyberspace?
"[...] These days the boundaries between reality and cyberspace are becoming increasingly blurred and the activities on the Web are becoming more two way and integrated with reality ... With going into cyberspace no longer being a discrete step (folks are more and more always there now) and with the primary activity often being to interact with other folks transparently, and you have a folding of cyberspace so severe that it just disappears into the ether." From Will Web 2.0 Kill Cyberspace? by Dion Hinchcliffe.
June 21, 2007
Toward a Metaverse
[...] The first, relatively simple step toward a Second Earth, many observers predict, will be integrating Second Life's avatars, controls, and modeling tools into the Google Earth environment. Groups of users would then be able to walk, fly, or swim across Google's simulated landscapes and explore intricate 3-D representations of the world's most famous buildings ... A second alternative would be to expand the surface area of Second Life by millions of square kilometers and model the new territory on the real earth, using the same topographical data and surface imagery contained in Google Earth ... That's a much more difficult proposition ...
[W]ithin 10 to 20 years--roughly the same time it took for the Web to become what it is now--something much bigger than either of these alternatives may emerge: a true Metaverse ... It will look like the real earth, and it will support even more users than the Snow Crash cyberworld, functioning as the agora, laboratory, and gateway for almost every type of information-based pursuit. It will be accessible both in its immersive, virtual-reality form and through peepholes like the screen of your cell phone as you make your way through the real world. And like the Web today, it will become "the standard way in which we think of life online..."
While Second Life and Google Earth are commonly mentioned as likely forebears of the Metaverse, no one thinks that Linden Lab and Google will be its lone rulers. Their two systems are interesting mainly because they already have many adherents, and because they exemplify two fundamentally different streams of technology that will be essential to the Metaverse's construction...
"Google Earth itself is really neat," comments Jamais Cascio, the Metaverse Roadmap coauthor. "But Google Earth coupled with millions of sensors around the world, offering you real-time visuals, real-time atmospheric data, and so on--that's transformative."
Indeed, it's important to remember that alongside the construction of the Metaverse, a complementary and equally ambitious infrastructure project is under way. It's the wiring of the entire world, without the wires: tiny radio-connected sensor chips are being attached to everything worth monitoring, including bridges, ventilation systems, light fixtures, mousetraps, shipping pallets, battlefield equipment, even the human body ... "Augmented reality and sensor nets will blend right into virtual worlds," predicts Linden Lab's Ondrejka. "That's when the line between the real world and its virtual representations will start blurring." From Second Earth - The World Wide Web will soon be absorbed into the World Wide Sim: an environment combining elements of Second Life and Google Earth; by Wade Roush, Technology Review.
June 20, 2007
[iDC] game culture (?) (!) (%#@)
[...] I am at the beginning of some research into the area of game physics (that is, the simulation of physics in video games) ... The research is somewhat motivated by several observations:
Physics has an interesting split personality in that it is viewed as very fundamental in the sciences with a lot of 'prestige' (Einstein is a folk-hero), but at the same time Physics seems to be largely rejected as a discussion topic by non-science educated folks. As soon as it gets a bit more detailed and mathematical, most people try to avoid Physics. The current state of physics education (low numbers of graduates, etc.) confirms this.
Games, on the other hand, are well on the way (if not already there) to becoming the most used, most influential, most profitable entertainment medium. Therefore one can safely assume that they exert a significant influence on our culture. This trend will continue in the years to come, especially as graphics capabilities reach photorealistic levels.
Game Physics is an element of video games that has always been present and is even at their root (SpaceWar, the first video game, was a physics simulation). It is becoming even more common due to the 3D and immersive nature of today's video games, because it makes games 'playable'. Generally it is a very important aspect of games, since it is directly linked to the interactivity and 'feel' of the gameworld, but as a topic of game-theoretical analysis it is often overlooked.
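For readers wondering what 'game physics' concretely looks like, here is a minimal sketch (in Python for readability; real engines are typically C++) of the kind of fixed-timestep integration step many games build on. Gravity is real; the damping and bounce constants are invented, tuned-for-feel numbers, which is rather the point of the discussion above.

```python
GRAVITY = -9.81  # m/s^2: often the only constant a game keeps from real physics

def physics_step(pos, vel, dt=1.0 / 60.0, damping=0.99):
    """One semi-implicit Euler step: update velocity first, then move the
    position using the *new* velocity, which stays stable at fixed timesteps."""
    x, y = pos
    vx, vy = vel
    vy += GRAVITY * dt
    vx *= damping            # ad-hoc 'air resistance': folk physics, not real drag
    vy *= damping
    x += vx * dt
    y += vy * dt
    if y < 0.0:              # bounce off the ground, losing half the speed
        y, vy = 0.0, -vy * 0.5
    return (x, y), (vx, vy)

# A thrown object: the arc looks right, which is all gameplay requires.
pos, vel = (0.0, 1.0), (5.0, 5.0)
for _ in range(600):         # ten simulated seconds at 60 Hz
    pos, vel = physics_step(pos, vel)
```

The per-step damping factor has no physical justification at all, yet it 'feels' plausible to players; this is exactly the folk physics the discussion below is asking about.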
As for some specific questions, I am currently interested in a discussion on how games affect the relationship we have with the real world. Obviously there are social implications to video game play, as we can see from the whole "violence in games" debate. In relation to Physics, I am looking at more fundamental changes in how we construct truths: does video game physics create a form of "folk physics" (my immediate answer would be yes), and does that change the way we think or even act? For readers unfamiliar with video games, think of the 'Movie Physics' - which today's games largely adopt - such as the engine roar of a space ship flying by the camera (... this should be silent in a vacuum).
So in some sense, the question extends the common "does it matter that movies have pseudo physics?" - discussed extensively on sites like http://www.intuitor.com/moviephysics/ - to "does it matter when video games have pseudo physics?" (and believe me they do!). Why don't game developers try harder, and why don't game players expect more?
If one looks critically at mass media today, are we not creating a whole new "church of entertainment", especially with video games, where - at least as far as physical simulations are concerned - the scientific method and precision become irrelevant, or at best secondary to the goal of implementing the next, better game as 'VR drug' or 'consensual hallucination', as W. Gibson puts it?
Could the trend we see in the popularity of 'documentaries' - especially ones with a scientific slant, such as 'An Inconvenient Truth' - be extended to mainstream video games? Personally I feel sad to see that Physics is typically reduced to animating ragdoll enemies, chaingun bullets and flamethrower particles ... so why not extend game physics to include more quantum mechanics? Maybe this would allow us to bridge the gap between game cultures and science cultures.
Corrado Morgana wrote:
I think you may be misinterpreting the notion of physics within games...it is not there to demonstrate science, but physics as in physicality..objects have weight, mass and motion similar to real-life objects and are used as ludic elements..that which fosters gameplay..check out Half-Life 2 et al and much indie gaming which, for a while, seemed to be obsessed with physics-based gaming..it is simulation within set parameters
Folk physics..yup, agreed, but you have to consider the role of games; Freelancer is a damn sight more fun than Orbiter, a very realistic space flight simulator
There are few games which address scientific issues. OK, Prey and the soon-to-be-released Portal may look at disjointed physicality (that word again)..but talking about games and quantum mechanics may be more about pedagogic simulation; a difficult game premise
However there may be an answer..
Check out Garry's mod for Half life 2...a physics based sandpit that could ideally be used, I think in your terms
Corrado Morgana...games researcher
iDC -- mailing list of the Institute for Distributed Creativity iDC[at]mailman.thing.net http://mailman.thing.net/cgi-bin/mailman/listinfo/idc
June 19, 2007
OFFSHORING SECOND LIFE
Linden Lab's latest user stats include a fascinating tidbit, first unearthed by Celebrity Millionsofus, who used them to compile a list of active users by national origin, then listed these on the blog of Millions of Us (a sponsoring partner of NWN.)
Here are the top ten countries where Second Life Residents are most active, based on the average number of minutes they spend in-world per day, per user: 161 - Cayman Islands; 132 - Indonesia; 130 - Netherlands; 128 - Canada; 122 - United States; 122 - Korea, Republic of; 116 - Russian Federation; 109 - French Southern Territories; 104 - Antigua and Barbuda; 103 - Martinique.
A lot of smaller nations, and a few you might not expect-- a localized version of SL has only recently been released to the South Korean market, for example, and it already boasts 1018 power users.
Second Life's most active Residents, however, are from a country you'd easily recognize, if you read a lot of thriller novels.
Located in the Western Caribbean and a prime destination for scuba divers, the Cayman Islands are a British overseas territory with a tiny population of some 45,000. But they are mostly known as the world's center for offshore banking and business, the place where companies and wealthy investors from around the globe keep their offices and savings, so they can take advantage of the country's low taxes and minimal regulation. (As of 2005, an incredible 70,000 companies were incorporated there.) This encourages a belief that it's a nexus for money laundering and other criminal activities-- it's where Tom Cruise's law office spirits away mob money, in The Firm-- but for what it's worth, an IMF report notes that "the overall compliance culture within Cayman is very strong, including... [that] related to anti-money laundering obligations." In any case, it is where a lot of people from around the world have a lot of their money stored away, when they don't want their own governments taxing or otherwise regulating it. Indeed, besides tourism, managing this activity is the Caymans' only other major industry.
And last month, this is where 36 Second Life Residents were, going in-world some 2.7 hours a day, more than Residents from any other country. (Other nations in Celebrity's most active thirty are also known for being offshore bank/tax havens-- Antigua and Barbuda, Aruba, and the Netherlands Antilles, among others.)
Maybe they're all just tour guides and secretaries who like to play Tringo and dance, in their off hours. But it's entirely plausible to assume at least some of them are in there converting large sums of Linden Dollars to US Dollars and thence to KYD, the Cayman Islands Dollar. (And if they are, I'd hasten to add, there's no reason to suspect anything illegal or unethical.)
No, the point I'd make here is a different one. Over the last several decades, the Caymans have existed as an artificial appendage to the global economy, a country that thousands of corporations call home-- even if their headquarters is a mere mailing address. (And here's a good place to note that corporations are themselves a kind of avatar, a fictitious entity authorized by governments to conduct business as if they were a person.)
So now, with our active Cayman Residents, the circle of economic unreality is almost surely complete: real money is converted into the currency of a virtual world, which is then converted back into the real money of a semi-virtual country, where it becomes the assets of a company that only exists as a post office box by the Caribbean sea.
Pictured: Cayman island of Second Life (not affiliated with RL Cayman Islands.) [blogged by Hamlet on New World Notes]
"The virtual communities created by online games have provided us with a new medium for social interaction and communication. Avatar Machine is a system which replicates the aesthetics and visuals of third person gaming, allowing the user to view themselves as a virtual character in real space via a head mounted interface. The system potentially allows for a diminished sense of social responsibility, and could lead the user to demonstrate behaviours normally reserved for the gaming environment."
That was the blurb i read on the website of Charming Disaster, an exhibition featuring several works created by students of the Royal College of Art’s Design Products department (thanks Noam for telling me about it!) What i like about Avatar Machine is that, like the One Eye Ball but unlike several similar projects i've blogged in the past, it's not just about coming up with a nice, funky, geeky project before everybody else; it is also a very eye-pleasing work. I love the way that the designer pushed the concept further by making the user wear the costume of an avatar; i imagine that it allows observers to participate (albeit in a much more discreet way) in the experience. So i asked design student Marc Owens to tell me more about the work.
How does it work technically?
The system works in a very simple way. The user wears a body harness which has three 2m-long aluminium rods protruding from it, forming a type of tripod. A wide-angle pinhole camera sits at the point where all three rods meet, pointing directly back at the user. The video footage recorded by the camera is transmitted to the monitor inside the headset so it can be viewed by the user. The user can therefore see themselves in the third person, from head to toe, on the monitor interface.
It is WoW that inspired the kind of costume that the player has to wear in your installation, did i get that right? Any reason why you chose to refer to WoW?
You are correct in thinking that the project is World of Warcraft inspired. As the world's most popular online game, the aesthetics and characters of WoW are the most easily recognisable. Also, i wanted the character i created through the costume to be large in stature, so the user could experience a sense of invincibility when controlling the avatar on the interface. Also, WoW is more classically fantasy-based than other MMORPGs, like Second Life for example, so that is an element i wanted to bring into the experience of the product.
Thirdly, i am, ashamed to admit, quite a fan of World of Warcraft!
What kind of behaviour did you observe when visitors to the Charming Disaster show were playing with your work?
At the Charming Disaster show a few weeks ago, the screen within the headset burned out after an hour into the performance. So only one or two people had the opportunity to experience the system. However since then, i have carried out some avatar sessions in Hyde Park, allowing ample room for the user to do as they wish, and behave as they like.
The types of behaviour i observed were all quite similar. Everyone was quite cautious with their movements to begin with, moving around with baby steps as they slowly got used to controlling their movement from the third-person perspective. After a few minutes, users began to gain confidence, not only moving faster and more fluidly, but also beginning to mimic the types of movement that they imagined the avatar would demonstrate, i.e. stomping around and swinging their arms. Another element of the behaviour i observed was that after getting used to the system, users felt comfortable enough to approach passing 'humans' and observe their reaction through the interface.
June 15, 2007
June 06, 2007
Brian Eno and the Long Now in Second Life
Advance word comes to 3pointD that the Long Now Foundation (which has to be one of the coolest organizations on the face of the planet) will bring Brian Eno’s art installation, 77 Million Paintings, to the virtual world of Second Life at the end of June, concurrent with the show’s real-world opening. The virtual show is being built out by a startup metaverse services firm known as blueair.tv. For those who don’t know it, the Long Now Foundation is developing the world’s slowest computer, which is meant to “do for thinking about time what the photographs of Earth from space have done for thinking about the environment.” As the site points out, “Such icons reframe the way people think.” The term “long now” was coined by Eno, who, for those who don’t know him, is one of the most influential contemporary musicians around, and also a Long Now board member. Eno has also been more closely involved with things metaversal of late, having hooked up with Will Wright some time back. (And if you don’t know who Will Wright is, you had really better start doing your homework.) No details yet of what the SL opening and build will feature, but if Eno is making an appearance it’s sure to be extra cool. [posted by Mark Wallace on 3pointD]
De Pong Game
De Pong Game is a recycling of the famous game PONG [Atari 1975-1977]. This new version has been built in Flash [ActionScript] by Arjan Westerdiep for Recyclism™ (Benjamin Gaulon).
This project explores the concept of Augmented Reality by using and interacting with urban architecture [buildings] as the background for the game. The game is projected onto a building, and the limits of that building become the limits of the game area: the ball bounces along the limits of the walls, and the software also uses the windows as obstacles, so the ball is confined to the frame of the building. As you touch the ball with the slider its speed increases, and because the ball bounces off all the obstacles of the architecture, it becomes more and more difficult to play.
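The bouncing logic described can be sketched as follows (in Python rather than the original Flash/ActionScript, and with invented building and window coordinates; the real piece derives these from the projected facade):

```python
# Invented coordinates, in projector pixels: the building's footprint
# and two of its windows, used as obstacles for the ball.
BUILDING = (0.0, 0.0, 640.0, 480.0)               # x, y, width, height
WINDOWS = [(100.0, 120.0, 60.0, 80.0), (300.0, 200.0, 60.0, 80.0)]

def inside(rect, x, y):
    rx, ry, rw, rh = rect
    return rx <= x <= rx + rw and ry <= y <= ry + rh

def step_ball(x, y, vx, vy):
    """Advance the ball one frame: reflect off the building's outer walls
    and off any window it touches, then clamp it to the building's frame."""
    x, y = x + vx, y + vy
    bx, by, bw, bh = BUILDING
    if x <= bx or x >= bx + bw:
        vx = -vx
    if y <= by or y >= by + bh:
        vy = -vy
    for w in WINDOWS:
        if inside(w, x, y):
            vx, vy = -vx, -vy      # crude reflection off a window
    x = min(max(x, bx), bx + bw)   # the ball never leaves the facade
    y = min(max(y, by), by + bh)
    return x, y, vx, vy
```

The described speed-up on each paddle hit would simply scale `vx` and `vy` by a factor slightly above 1, which is what makes the game progressively harder as the ball ricochets off more of the architecture.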
June 04, 2007
Turbulence Awards Five Commissions
Turbulence Commissions: Mixed Realities :: Venues: Ars Virtua (Second Life); Huret & Spector Gallery (Emerson College, Boston); and Turbulence.org :: With funds from the Andy Warhol Foundation for the Visual Arts.
New Radio and Performing Arts, Inc. is pleased to announce the winners of "Mixed Realities," an international juried competition and exhibition. The five commissioned works ($5,000 each) will be exhibited simultaneously at Ars Virtua (Second Life), Huret & Spector Gallery (Boston), and Turbulence.org in spring 2008. They are:
IMAGING KALININGRAD: THE SEVEN BRIDGES OF KOENIGSBERG by John (Craig) Freeman; REMOTELY COUPLED DEVICES (WORKING TITLE) by Usman Haque, Georg Tremmel and Neill Zero; NO MATTER by Scott Kildall and Victoria Scott; THE VITRUVIAN WORLD by Michael Takeo Magruder, Drew Baker and David Steele; and CATERWAUL by Pierre Proske, with technical assistance from Artem Baguinski and Brigit Lichtenegger.
KALININGRAD AND THE SEVEN BRIDGES OF KOENIGSBERG by John (Craig) Freeman is an extension of Freeman’s "Imaging Place" project: a place-based, immersive, virtual reality project that takes the form of a user-navigated, interactive computer program combining panoramic photography, digital video, and three-dimensional technologies to investigate and document situations where the forces of globalization are impacting the lives of individuals in local communities. The famous Swiss mathematician Leonhard Euler invented topology--the key to understanding how networks are formed--in the city of Konigsberg, now Kaliningrad, in 1735. When Euler visited the city, there were seven bridges connecting the city's center to the banks of the Pregel River. A favorite pastime for visitors was to try to solve the puzzle of whether or not a person could walk throughout the city and cross each of the bridges exactly once. Freeman will retrace the imaginary steps of Leonhard Euler across these seven bridges. Users will be able to navigate the virtual space as the story of the seven bridges unfolds, as well as the story of how globalization is impacting this remote Russian city. Full proposal.
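Euler's resolution of the puzzle is simple enough to check in code: a walk crossing every bridge exactly once exists in a connected graph only if zero or two land masses are touched by an odd number of bridges, and all four of Königsberg's land masses have odd degree. A minimal sketch (the land-mass names are our own labels):

```python
from collections import Counter

# The four land masses of Koenigsberg and its seven bridges
# (names are our labels; the bridge layout is the historical one).
BRIDGES = [
    ("north_bank", "island"), ("north_bank", "island"),
    ("south_bank", "island"), ("south_bank", "island"),
    ("north_bank", "east_bank"), ("south_bank", "east_bank"),
    ("island", "east_bank"),
]

def eulerian_walk_exists(edges):
    """Euler's criterion: a connected multigraph has a walk crossing every
    edge exactly once iff it has zero or two vertices of odd degree."""
    degree = Counter()
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    odd = sum(1 for d in degree.values() if d % 2 == 1)
    return odd in (0, 2)

# All four land masses have odd degree (5, 3, 3, 3), so no such walk exists.
```

Running `eulerian_walk_exists(BRIDGES)` returns False: the visitors' pastime had no solution, which is precisely what Euler proved.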
REMOTELY COUPLED DEVICES by Usman Haque, Georg Tremmel and Neill Zero: This proposal takes as its starting point that the distinction between 'virtual' and 'real' is as quaint as the nineteenth century distinction between 'mind' and 'body'. As such, the spaces of Ars Virtua and Huret & Spector Gallery will be treated equivalently. A device will be constructed for each space which--via the EnvironmentXML framework--will enable people around the world to build remote devices and environments that respond in real time to the local environmental conditions of the two galleries. A Java applet and historical data repository, residing on the Turbulence.org server, will enable the tracking of real time and historical sensory data from the two locations as well as the connections that people make over time. Full proposal.
NO MATTER by Scott Kildall and Victoria Scott: No Matter is an interactive installation that translates the psychology of the Second Life virtual economy into physical space. Virtual economies--where artificial currencies are exchanged in online worlds for dematerialized goods and services--are presently impacting the ‘real’ economy. The leakage of one to another appears on eBay, on the SL Exchange and through virtual land brokers. This mixed economic model overturns established relationships between labor and production. As specialty items can be obtained at a fraction of their real world cost, our perception of the value of objects becomes further obscured. No Matter reflects this conflation of imaginary and real economics by: (1) commissioning the creation of imaginary objects in Second Life; (2) inviting viewers to re-construct these immaterial 3D objects in physical space; (3) paying them Second Life wages; and (4) reselling the replicas on eBay. Full proposal.
THE VITRUVIAN WORLD by Michael Takeo Magruder, Drew Baker and David Steele: In the 1st century BC, Roman writer, architect and engineer Vitruvius authored specific building formulae based on the guiding principles of strength, utility and beauty. For him, architecture is intrinsically linked to nature and is an imitation of cosmic order. The most well-known interpretation of this postulate is the Vitruvian Man by Leonardo da Vinci in which the human form is depicted in unity with the square and circle--representing material and spiritual existence respectively. Given that this tripart union of human body, material form and spiritual essence maintains relevance within the current climate of distributed presence, mixed realities and internet culture, The Vitruvian World will embody the principles of Vitruvius within a contemporary context. Full proposal.
CATERWAUL by Pierre Proske, with technical assistance from Artem Baguinski and Brigit Lichtenegger: When someone screams in real life, do they hear us in virtual reality? Do they want to? The proliferation of networked online worlds has provided a small quota of the human race the option to seek refuge in utopian, less troubled imaginary lands. Rolling synthetic green pastures offer us respite from a planet undergoing exploitation and climate change. For those of us too firmly rooted in this material world to join them, how shall we communicate with them? In what way shall we lament their departure? The essence of Caterwaul is a large, monolithic, dark wall that is represented both in the real and virtual worlds. It is a one-way portal to the virtual world through which people can whisper their thoughts, scream their frustrations and convey regret without the privilege of reply. It is a wailing wall through which to mourn the loss of our humanity to the virtual network. Full proposal.
The competition was juried by Yasmine Abbas, Founder, Neo-Nomad; Michael Frumin, Technical Director Emeritus, Eyebeam; James Morgan, Director, Ars Virtua; Trebor Scholz, Founder, Institute for Distributed Creativity; and Helen Thorington, Co-Director, Turbulence.
May 23, 2007
Annotating for Civic Engagement
The Electronic Lens explores and creates new paradigms of civic ubiquitous networking with mobile technologies. We think of Electronic Lens as something of a viewfinder. Using a motion that is already familiar (think point and shoot camera phones), the citizen can use the eLens to gather information about physical objects and places.
The eLens matches electronic information with the physical environment in an innovative way. For example, eLens users can post lasting messages in physical locations, tag buildings and places, or create social networks based on interest and social affinities. eLens interactions combine the physical environment with formal and institutional information and the annotations from users’ personal experiences.
Ultimately the eLens enhances the value of the city for its citizens by making their environments more accessible, more culturally vibrant, more socially just. The eLens fosters communication among people and between institutions; as a result citizens are now better able to navigate the social, institutional and physical urban space.
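The core eLens interaction described above (post a lasting note at a place, retrieve the notes near you) can be sketched as a small location-tagged store. This is an illustration only: the class and method names are hypothetical, not the actual eLens API, and distance uses an equirectangular approximation that is adequate at city scale.

```python
import math
from dataclasses import dataclass

@dataclass
class Annotation:
    lat: float
    lon: float
    author: str
    text: str

class AnnotationStore:
    """Toy store for location-tagged notes, queried by proximity."""
    def __init__(self):
        self.notes = []

    def post(self, lat, lon, author, text):
        self.notes.append(Annotation(lat, lon, author, text))

    def nearby(self, lat, lon, radius_m=50.0):
        # Equirectangular approximation of great-circle distance.
        def dist_m(a):
            dlat = math.radians(a.lat - lat)
            dlon = math.radians(a.lon - lon) * math.cos(math.radians(lat))
            return 6371000.0 * math.hypot(dlat, dlon)
        return [a for a in self.notes if dist_m(a) <= radius_m]

store = AnnotationStore()
store.post(42.3601, -71.0589, "ada", "Great mural on this wall")
store.post(48.8566, 2.3522, "ben", "A note left in Paris")
print([a.text for a in store.nearby(42.3601, -71.0589, radius_m=100)])
# ['Great mural on this wall']
```

A real deployment would add persistence and the social-network layer the project describes, but the point-and-retrieve gesture reduces to exactly this kind of proximity query.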
May 18, 2007
Intervention in Public Space 2007
256² is an intervention in public space. In this temporary piece, Aram Bartholl marked the outline of the virtual NewBerlin in Second Life in physical space at Alexanderplatz, Berlin, on 15 March 2007.
Jan Northoff and Tobias Neisecke from YOUseeMEin3D.com had the vision to rebuild Berlin entirely in Second Life (they have a reservation on over 3000 Sims!). In spring 2007 they started to build the center of NewBerlin in Second Life. The TV tower at Alexanderplatz stands right in the middle of the first Sim. A Sim (Simulator) is the land unit of Second Life, measuring 256 x 256 meters in virtual space. But a Sim is not just a square surface in virtual space: in terms of software and hardware, each Sim is a separate instance, and a single server hosts one Sim and all of the action on it. The billing model of Linden Lab, the company that runs Second Life, is also based on that unit.
The owner of a Sim in Second Life controls many functions and properties of that "land": When is it night and when is it daylight? May visitors to that sim create objects? Will those objects stay on that sim? Is health/damage activated? (Avatars can die.) ...and many more options.
The idea of the public intervention "256²" was to make the first Sim of Second Life's NewBerlin visible in real-life public space at Alexanderplatz, Berlin. Equipped with 64 pieces of plain chalk (1 piece of chalk = 16 m; 64 pieces = 1024 m), Aram Bartholl drew the outline of this 256 x 256 meter square in public space. The line of chalk, representing the virtual copy of Alexanderplatz, crossed a McDonald's, a church, two housing blocks, a train station and a mall, among other objects and surfaces.
"256²" was produced in cooperation with Jan Northoff and Tobias Neisecke from YOUseeMEin3D.com. Thanks to the team for documentation and support.
May 10, 2007
Wafaa Bilal: Domestic Tension
Iraqi-born artist Wafaa Bilal has become known for provocative interactive video installations. Many of Bilal's projects over the past few years have addressed the dichotomy of the virtual vs. the real.
He attempts to keep in mind the relationship of the viewer to the artwork, with one of his main objectives transforming the normally passive experience of viewing art into an active participation. In Domestic Tension, viewers can log onto the internet to contact, or shoot, Bilal with paintball guns.
Bilal’s objective is to raise awareness of virtual war and privacy, or lack thereof, in the digital age. During the course of the exhibition, Bilal will confine himself to the gallery space. During the installation, people will have 24-hour virtual access to the space via the Internet. They will have the ability to watch Bilal and interact with him through a live web-cam and chat room. Should they choose to do so, viewers will also have the option to shoot Bilal with a paintball gun, transforming the virtual experience into a very physical one.
Bilal’s self-imposed confinement is designed to raise awareness about the life of the Iraqi people and the home confinement they face due to both the violent and the virtual war they endure on a daily basis. This sensational approach to the war is meant to engage people who may not be willing to engage in political dialogue through conventional means. Domestic Tension will depict the suffering of war not through human displays of dramatic emotion, but through engaging people in the sort of playful interactive video game with which they are familiar.
For the duration of May 2007, Bilal will live in the FlatFile Galleries in Chicago. The public can watch him 24 hours a day over a live webcam; and if they choose, visitors to his website can shoot him with a remote-controlled paintball gun.
You can participate - e.g. shoot at him with a paintball gun - by clicking here.
May 07, 2007
Immersive Sight Within the Third Space:
Augmentation and Spatial Interface in Exhibition Space
"Our field of vision is a continual, multi-tiered number crunching. Bicameral sight is always being processed, interpreted, reacted to, adjusted for focus, comparisons made. It simply is always running as an immersive, multi layered interaction of information and movement in a space. The logical progression of virtual reality is into augmented reality with smaller lenses and data fit more discreetly and logically layered into one's natural field of vision. This has many applications in the traditional museum space. The eye through the cerebral cortex processes and contextualizes constantly at a rapid rate. The key is to allow spatial augmentation to do the same. A fast engine can adjust levels of data and visuals quickly as the participant moves and adjusts their desired experience. This allows another sense of sight." From Immersive Sight Within the Third Space: Augmentation and Spatial Interface in Exhibition Space by Jeremy Hight, Neme.
April 13, 2007
Pierrick Sorin: La Pietra del Paragone
There is an interesting article on Pierrick Sorin's (a French video artist) latest collaborative venture in this month's Artpress (April issue, number 333): La Pietra del Paragone, or The Touchstone in English ... an opera by Rossini which was staged at Paris' Châtelet theatre in January.
I’m not a huge opera fan, but the staging of this particular opera seems as if it was an event to see. The singers perform on stage against a bluescreen, which is filmed by a trio of cameras positioned centre stage, clearly visible to the audience. Simultaneously, a scale model of a set (stage right) is filmed by another trio of cameras. The two sets of video are paired off (left, centre and right), and each corresponding pair is superimposed and projected on three giant screens above the stage to give the impression that the singers are actually in front of a real set.
A similar setup to those used in news and weather forecasts, except that here the end result is seeing both the composited image and its construction in real time. The possibilities this gives for large-scale and rapid set changes, or even the construction of impossible sets using areas of the stage which normally prove difficult to use, are superb. The network here? Well, it's the linking of real and simulated to create a very different type of virtual reality, to what must be comic effect, in real time.
A television broadcast and DVD of the opera is planned for the near future. To see other work by Pierrick Sorin see his website. Below is a video of some of his video installations created for Chanel in 2001.
April 05, 2007
Roy Block: A platform game with real platforms
Experimental gaming these days is all about playing with reality. From virtual reality and alternate reality to augmented reality, more and more people are trying to merge video games and real life in interesting ways. Sebastien Schmieg's art project/game Roy Block adds another term to this growing list: mixed reality.
Roy Block uses real-life, handheld wooden building blocks as the platforms for an on-screen avatar. A hidden camera detects the blocks as they're pressed against the tracing paper projection screen, translating their position and alignment into in-game data.
The "gameplay" in the project is pretty basic -- just guide the periodically jumping Roy from one end of the screen to the other while avoiding floating enemies -- but Schmieg sees the potential for more complex play by assigning different functions to each side of the blocks. Schmieg also has an idea for a version "as big as a wall ... with blocks so big that you need both hands to hold one." Would that be mixed reality exergaming? More buzzwords, stat! [Posted by Kyle Orland on Joystiq]
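The camera-to-game-data step Schmieg describes can be sketched as a threshold-and-flood-fill pass over a grayscale frame: dark regions where blocks press against the screen become platform rectangles. This is a toy stand-in under assumed conventions, not Schmieg's actual vision code; the frame format, threshold and function name are all illustrative.

```python
def detect_platforms(frame, threshold=128):
    """Find dark connected regions in a grayscale frame (list of rows)
    and return their bounding boxes as (x, y, w, h) platform rects."""
    h, w = len(frame), len(frame[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for y in range(h):
        for x in range(w):
            if frame[y][x] < threshold and not seen[y][x]:
                # Flood-fill one connected dark region.
                stack = [(x, y)]
                seen[y][x] = True
                xs, ys = [], []
                while stack:
                    cx, cy = stack.pop()
                    xs.append(cx)
                    ys.append(cy)
                    for nx, ny in ((cx+1, cy), (cx-1, cy), (cx, cy+1), (cx, cy-1)):
                        if 0 <= nx < w and 0 <= ny < h and not seen[ny][nx] \
                                and frame[ny][nx] < threshold:
                            seen[ny][nx] = True
                            stack.append((nx, ny))
                boxes.append((min(xs), min(ys),
                              max(xs) - min(xs) + 1, max(ys) - min(ys) + 1))
    return boxes

# A tiny synthetic frame: 255 = bright screen, 0 = shadow of a block.
frame = [[255] * 8 for _ in range(6)]
for y in range(2, 4):
    for x in range(1, 5):
        frame[y][x] = 0          # one 4x2 block pressed to the screen
print(detect_platforms(frame))   # [(1, 2, 4, 2)]
```

A real implementation would grab frames from the camera and run this (or a library equivalent, such as connected-component labeling) each tick, feeding the rectangles to the game's collision system.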
April 04, 2007
Day Of The Figurines at Lighthouse, Brighton
Blast Theory presents the world premiere of Day Of The Figurines, a mass participation artwork using mobile phones that is part board game and part secret society. Set in a fictional English town that is littered, dark and underpinned with steady decay, the game unfolds over 24 days, each day representing an hour in the life of the town. Up to 1000 players place their plastic figurines onto the board. They are moved by hand in a meticulous performance throughout the duration of the exhibition.
Players participate by sending text messages. They must help other players as they receive updates from the town, missions and dilemmas. They can also chat to players who are near them in the town using text messages as events unfold in the town: a gig by Scandinavian death metallists, an invasion by an Arabic army, a summer fete. Day Of The Figurines is the world’s first MUD (Multi User Domain) for mobile phones.
4th to 27th April 12 – 4pm
Venue: Lighthouse, 28 Kensington Street, Brighton, BN1 4AJ
Day Of The Figurines was developed by Blast Theory in collaboration with the Mixed Reality Lab at The University of Nottingham, Sony Net Services and The Fraunhofer Institute as part of the European research project IPerG (Integrated Project on Pervasive Gaming).
Additional tour Dates
Can You See Me Now? won the Golden Nica for Interactive Art at Prix Ars Electronica and was nominated for a BAFTA Award.
April 02, 2007
Judgement Day for 1st Life Game Figures
Tagged City Play for Real Players in Real Cities
Attracted by the slogan Become a game figure by implant!, participants were invited to get an injection of “RFID Judgement tags” under the skin. They then become Real Players, 1st life personae who are also game figures in the Reality Engine while playing in a real city. They can drive tuned Plymouth racing cars to tag the city and receive a tagging toolbox containing graffiti, spray stencils, stickers, RFID stickers and implant injection kits.
Real objects in the city are subjectively chosen for tagging. The tags are functional but useless (RFID tags with zero data). By putting this zero-tag on an object, players devalue real-world things into virtual play-objects. If the Real Players find a tagged object with a value assigned to it, they zap it. The goal is to change the value of tags to Zero by using their “Wunderbäumchen” (inspired by the car air fresheners in the shape of a pine tree), technical toys used for finding and reading tags and/or emitting a target-oriented electromagnetic pulse..." Continue reading >> [blogged by Regine on we-make-money-not-art]
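The zero-tag mechanic, where players hunt valued tags and drive them to Zero, could be modeled, purely as an illustration, like this. The class and methods are invented for the sketch and are not the project's actual software.

```python
class TaggedCity:
    """Toy model of the zero-tag game: objects carry RFID tags with a
    value; zapping a valued tag resets it to Zero and scores."""
    def __init__(self):
        self.tags = {}

    def tag(self, object_id, value=0):
        # Placing a fresh zero-tag (or a valued tag) on a city object.
        self.tags[object_id] = value

    def zap(self, object_id):
        # A successful zap only counts if the tag held a nonzero value.
        if self.tags.get(object_id, 0) != 0:
            self.tags[object_id] = 0
            return True
        return False

city = TaggedCity()
city.tag("street_sign_42", value=3)   # an object someone assigned value to
print(city.zap("street_sign_42"))     # True: value driven to Zero
print(city.zap("street_sign_42"))     # False: already zero
```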
March 25, 2007
Ideas + Survey
Generic Infrastructures by Rob van Kranenburg: Today we are in the worst situation imaginable. Our global and undisputed computing paradigm posits that computing processes are successful only in as much as they disappear from view. Our design focus increasingly follows Philips' untenable but seductive ‘sense and simplicity’, resulting in the bug-as-a-feature design of the iPod Shuffle. Our educational system follows this systemic hide-complexity strategy, which favors the large industrial labs, IT conglomerates and, above all, their clinging to notions of IP and the patent that are firmly tied to their ways of doing business and making money. And our users, us? We are YOU, the most influential person of the year 2006, according to TIME Magazine. You fill in the Wikipedia entries in your spare time, you blog your daily activities, you co-bookmark on del.icio.us, upload your photos to Flickr, you buy mating gear in Second Life, and mark your position on Plazes or Google Earth. You fill out the forms. Isn't it time you started questioning the principles behind the formats? And, to make matters even worse, your naïve ideas of sharing are corrupting notions of privacy, transparency and informational architecture symmetry.
Ludium II - Synthetic Worlds and Public Policy by Edward Castronova: Synthetic worlds – million-player online environments with genuine markets, societies, and cultures – are exploding in size and significance. Real world governments around the globe are beginning to grapple with their implications in the areas of taxation, intellectual property laws, consumer rights, addiction, violence, and more. Should synthetic worlds be controlled by developers, or by governments, or both? What about the rights of users? What general norms should legislatures and courts follow? More NOEMA >>
March 21, 2007
Virtual actors take to the stage
"Beamed" from Florida to Illinois
"Actors working in real time from remote locations have been beamed onto a stage where they performed with live, in-the-flesh actors, US researchers say. The broadband-enabled experience allowed the audience to watch a seamless, 3D performance, according to the University of Central Florida. The technology could mean future theatergoers might attend plays where one or more actors are working outside the venue, even in a different country, or from their own homes. "We are not talking about holograms yet or the kind of imagery that requires funky glasses," says Professor John Shafer, a member of the cast. "[But] what we have done for this production has indeed pushed the envelope significantly. The production is a small historical step forward on several levels."
Shafer was hooked to a receiving and transmitting broadband-connected computer that can pull 130 megabytes of data in an instant. Although he performed in Florida, his body was 'beamed' onto a stage at Bradley University in Illinois, where he performed 'with' live actors there, as well as with actors beamed in from the University of Waterloo in Canada. Both 3D and 2D sets, consisting of multiple screens and special-effects lighting, helped to give the impression that all the actors, remote or not, existed in a single space." Continue reading Virtual actors take to the stage by Jennifer Viegas, ABC Australia News In Science. [via]
March 20, 2007
Interview with Julian Bleecker
On Digitizing Everything and More
"[...] Laminating the lives is an interesting point. Let's look a bit more closely at the current interactions between those worlds: so we have objects with social qualities, thanks to their net-savviness. On the other hand - so it seems - there's a movement to create the virtual realities we had been told about for decades, with Linden Lab's Philip Rosedale talking about "digitizing everything". Are those two discrete notions, or are they part of one process and if so, what kind of reality would it possibly lead to?
I think this notion of digitizing everything is a bit misguided. It presumes that most everything should be digital, without consideration as to what it means to have particular human experiences or activities transferred into digital form. It's a kind of digital-era imperialism or evangelization of the database gospel - "if it can be structured as data, put it on the Internets" - or something. It has so many things wrong about it, beginning with a lack of any sort of critical inquiry as to what it means, or why one would think it worthwhile, for instance, to make digital shopping malls in Second Life.
If the project of the digital age is to make everything that we have in "1st life" available in 2nd life, then I think we're on the wrong path. Laminating 1st life and 2nd life isn't about creating digital analogs. It's about elevating human experience in simple and profound ways. This blogject project is an early manifestation of what I think we will start seeing as clever tinkerers experiment with creating meaningful bridges between 1st life and 2nd life in which ethics precedes doing something "just 'cause" it's possible. And those bridges come firstly in very simple expressions of 1st life activity in 2nd life, or 2nd life activity in 1st life.
Bruce Sterling has a great turn-of-phrase I once heard him speak - "we will get the future we deserve." And in this case it means if we want Gap Stores, shopping malls and advertising signage in Second Life, that's what we'll get. But I think many people want something that will yield more habitable worlds, not more efficient ways to market and get people to buy crap. We could create impacts and shape thinking and behavior with digital networks, particularly ones that speak to 1st life. We can create bridges that capture, share and disseminate the current, day by day state of the thinning northern ice cap. We can create a 1st life / 2nd life bridge that makes this condition as present, as impactful and as resonant as a dripping faucet in the next room, rather than an abstraction only occasionally brought to our mind through a newspaper article or cocktail party conversation..." From Sascha Pohflepp's Interview with Julian Bleecker on we-make-money-not-art.
March 14, 2007
SXSW Xcript: Joi Ito and Justin Hall
Difference Between Simulation and Metaphor
"[...] Richard Bartle thought audio shatters the fantasy of MMOs. The fact of the matter I believe is that the fantasy is already broken. We used to have this notion of being in cyberspace or being in the real world, but there is no difference any more. The game is just a common activity we do in this community, my guildies IM me, I have guild chat on my speakers in my house. I don’t think separating that out as a fantasy is practical any more.
One criticism of 3D is, text is better because in 3D there’s nothing left to the imagination. Steven Johnson, who wrote a book called Everything Bad is Good For You, responded to that in this way: Is it better to sit in your bedroom and imagine going to Paris, or actually go to Paris?
Difference between simulation and metaphor: Second Life is more simulation. WoW, where you have 40 people gathering to enter Molten Core here, has nothing to do with real life, but being together and the leadership involved is metaphorically similar to those same things in the real world. John Seely Brown uses the word ensemble: when you’re playing music together and get in the zone, etc.: Shows video of boss fight set to Strauss waltz. This feeling of it just working and having all those hours of time pay off is a reward you get. Really young people who have never been in a group like this suddenly understand why they’re doing what they’re doing. You tell them in church to be quiet, they don’t know why, and it’s hard to explain. If you’re trying to get ready and prepare for a fight with a dragon, they know why..." [blogged by Mark Wallace on 3pointD]
March 13, 2007
A little bit on digital marks; I've selected a variety of them.
The semacode is a two-dimensional code that encodes a URL. The picture (left) is the semacode of Architectradure. Thank you Michael Surtees for the link! This tag embeds the URL of my blog; it can be read by your cell phone, which then sends you to the page. I guess it avoids typing in the URL, and you can rapidly go through a series of web sites using their respective tags.
It is especially useful for linking physical space to digital content. The Semacode Software Development Kit was developed for ubiquitous computing: it creates visual tags for objects and contexts that can be read using a mobile camera phone. The physical Wikipedia, called Semapedia, created by Alexis Rondeau and Stan Wiechers, allows you to add tags to places and things in order to link them to the relevant Wikipedia articles.
[images: trash can with a wikipedia tag] Semacode technical paper.
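Real semacodes are Data Matrix symbols with Reed-Solomon error correction, which is what lets a phone camera read them reliably. As a toy illustration of the underlying idea only (a URL's bytes packed into a grid of dark and light modules), here is a sketch with no error correction; the function names and 16-module width are arbitrary choices for the example.

```python
def url_to_bits(url, width=16):
    """Pack the ASCII bytes of a URL into rows of a width-wide bit
    matrix (1 = dark module), zero-padding the last row."""
    bits = []
    for byte in url.encode("ascii"):
        bits.extend((byte >> i) & 1 for i in range(7, -1, -1))
    while len(bits) % width:
        bits.append(0)
    return [bits[i:i + width] for i in range(0, len(bits), width)]

def bits_to_url(matrix):
    """Invert the packing: read the modules row by row, 8 bits per
    character, skipping the all-zero padding bytes."""
    flat = [b for row in matrix for b in row]
    chars = []
    for i in range(0, len(flat) - 7, 8):
        byte = 0
        for b in flat[i:i + 8]:
            byte = (byte << 1) | b
        if byte:
            chars.append(chr(byte))
    return "".join(chars)

m = url_to_bits("http://example.org")
print(bits_to_url(m))  # http://example.org
```

A production tag adds a finder border so the phone can locate and orient the grid, plus enough redundancy to survive blur and glare; none of that is modeled here.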
Urban Tapestries allows public mapping and sharing by combining mobile and internet technologies with geographic information systems. This system was linked to Natalie Jeremijenko's famous feral robots (open source robots for investigating contaminated urban sites) and called Robotic Feral Public Authoring: "Adding the sensor readings to online mapping tools, such as Urban Tapestries, suddenly brings the relationships between environment and home vividly to life. It enables people to feel they can learn about their environment and have the evidence to do something about it"
Yellow Arrow allows a community to tag places using arrows. You can post a message using the arrow and anyone could retrieve it using their cell phone. Another method to link digital content to a physical place. The community of yellow arrow is quite big. Their blog.
eLens allows anyone to create talking landmarks. Developed by the MIT Media Lab, it allows anyone to tag a place by adding a sticker to a physical location, a sticker that can later be scanned by a cell phone, in this case the Motorola A1000.
M-Views, developed at the MIT Media Lab in the Interactive Cinema group (media fabrics) with Glorianna Davenport, explores the "ideas, methods, and culture of mobile cinema, which is experienced in temporal and spatial narrative segments that can be delivered on context-aware mobile devices."
In 2002, I worked with Glorianna Davenport on technologies that allow digital information to communicate with the physical space. I worked on Passing Glances, a system that enables users to create ambient urban interludes through the use of SMS text messages. Associated graphics and storytelling were projected in the urban space.
Enarrative5 2003 paper
With these tags, the physical space is tagged to the digital space. One can think the other way around and tag the virtual space with physical content. That is what Josh Lifton told me he was working on the other day. Josh created the Plug sensor/actuator network for a project called the dual reality lab, which links the MIT Media Laboratory space to a virtual lab space in the Second Life online virtual world.
["Dual reality" is the concept of maintaining two worlds, one virtual and one real, that reflect, influence, and merge into each other by means of deeply embedded sensor/actuator networks. Both the real and virtual components of a dual reality are complete unto themselves, but are enriched by their mutual interaction. The dual reality Media Lab is an example of such a dual reality, as enabled by the Plug sensor/actuator network that links our actual lab space to a virtual lab space in the Second Life online virtual world.]
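The reflect-and-influence coupling described above can be sketched as a bridge object: readings on one side drive state on the other, in both directions. All names here are hypothetical; the real system uses networked Plug devices (sensor-laden power strips) and a Second Life scripting layer, neither of which is modeled.

```python
class DualRealityBridge:
    """Minimal sketch of a sensor/actuator bridge between a real and a
    virtual space: each side mirrors changes sensed on the other."""
    def __init__(self):
        self.real_state = {}
        self.virtual_state = {}

    def sense_real(self, channel, value):
        # A physical sensor (e.g. sound level at a desk) reports in...
        self.real_state[channel] = value
        # ...and its virtual counterpart is actuated to mirror it.
        self.virtual_state[channel] = value

    def sense_virtual(self, channel, value):
        # An avatar's interaction in the virtual lab flows back out
        # to an actuator in the physical lab.
        self.virtual_state[channel] = value
        self.real_state[channel] = value

bridge = DualRealityBridge()
bridge.sense_real("desk_3_sound_level", 0.8)  # activity in the real lab
bridge.sense_virtual("lobby_lamp", "on")      # an avatar flips a switch
print(bridge.virtual_state["desk_3_sound_level"],
      bridge.real_state["lobby_lamp"])        # 0.8 on
```

The interesting design questions the project raises (what to mirror, at what fidelity, and how each world reinterprets the other's data) all live inside these two methods in a real implementation.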
March 12, 2007
[iDC] Second Life wrap-up; thanks
On Gaming, Labor, and Education
[Image: Ulises Mejias speaking at Emerson College in Second Life, February 28, 2007 by John Craig Freeman] Joshua Levy wrote:
Thanks all for a stimulating discussion about Second Life, gaming, labor, and education.
Who is manufacturing virtual worlds and MMOs? In response to Michel Bauwens's question about this, Ana Valdes points out that the games market is almost 100 percent American, with these large companies having bought up smaller European companies over time. However, Julian Dibbell points to a chart suggesting that U.S. companies are in fact responsible for only 40-60% of worldwide games, with a U.S. market share of 61.3%. The question of market dominance vs. ideological dominance comes into play here. Ana argues that, even if the market share isn't 100%, most video games share an ideology with the U.S., though Michel says he witnesses in Thailand dominant themes from Korea and Japan.
And on to the big L, which inspired some of the best discussion on the cultural ramifications of virtual worlds, virtual labor, and virtual economies. Trebor opened the discussion by questioning the need to replicate the architecture of real-world sites within Second Life. "Why do we need a replication of our own campus? Why not rather build a Black Mountain College with a Bauhaus Annex? Why teach in this virtual environment? Will SecondLife become a 3D version of Wikipedia, a virtual knowledge bank that offers a playful and fun interface to participant-generated content? Will students simply demand such playful access to knowledge?" he asks. Eric Gordon offers a compelling argument for why he helped reproduce Emerson College's architecture in SL: "our decision to reproduce the architectural layout of campus and to recreate the Boston Common was deliberately made to correspond with our understanding of the platform's possibilities. We see Second Life as a way of creatively re-imagining the space. While, we're not able to screen student work in the physical Boston Common, it will be possible to do so in Second Life."
In addition to this recreation of material space, he finds that SL mirrors "first life capitalism" as well, that inequalities between labor and capital exist there as they do anywhere else in the world. Like historical relations between labor and capital, Trebor argues that users of sociable web media are not aware of their servitude towards the owners of those systems, though, like Michel, I take issue with his assertion that "many people in the US actually think that they are 'happy' and perceive this distributed labor of the sociable web as a fun leisure activity." We are not in a position to judge what many people in the U.S. think about their station in life, and to imply that the distributed labor of the sociable web simply provides gains for the owners of capital while pulling the wool over the eyes of the participants isn't fair towards either party.
Alan Clinton offers a refreshing take on the problem of virtual labor: "At the risk of revising Marcuse, couldn't we say that consciousness of servitude is not really the problem so much as providing strategies for political agency? People who are laboring know that they are laboring. People (and let's not dismiss the global south so quickly) who are suffering the violence of capitalism know they are suffering the violence of servitude. They may lack awareness of ways to name this violence or attack it, but they are not unaware of their suffering."
In response to the problem of proprietary systems like SL posing as open platforms, Andreas Schiffler suggests a radical, peer-to-peer system involving shared servers and open source software that becomes a challenge to the "'Operating System + Desktop' metaphor sold by Microsoft and Apple." This setup could also provide an open source and peer-to-peer alternative to SL.
In response to Simon Biggs' provocation that "SL is a misnomer. It is not a second life but simply a kind of first life, as constructed by a dominant elite, represented in such a manner that it will function to further inculcate and embed its associated ideology on a global scale. It will sustain the fundamental ethic of consumerism...that we are all potential suckers or grifters (often both) and that nobody is responsible for what happens to anybody else. In short, it is another rip off culture," I would point him to a group that I'm involved with, RootsCampSL, progressive activists who use SL as a platform for their work. No one that I know there believes that their work stays in SL; rather, it offers a unique space (in addition to other unique spaces) from which to get the message out. I would agree that SL is not a second life but in fact an extension of first life, but I have failed to find a dominant ideology there and in fact find it a fertile training ground for almost any ideology at all -- kind of like first life. Of course, I could just be blind to my own exploitation...
And Charlie Gere helps us remember that terrorism, exploitation, or even rape in SL are not the same as their real-world counterparts. "Again imagine the reaction of someone who has been involved in attempting to build and sustain communities in, for example, Iraq or Palestine, listening to someone describe the problems of community building in SL. I think grasping and holding onto this distinction is incredibly important." We need to keep perspective when talking about these virtual worlds and to remember that, however they provide us with experiential or spiritual stimulation, they are still secondary to the actual life-or-death circumstances most global citizens face.
Looking forward to more discussion of this going forward; I trust that, in the face of so much media hype that inflates the economic and sensational aspects of SL, we can all provide an ongoing counter-commentary that provides a little more depth and context.
Trebor Scholz wrote:
You can now read Henry Jenkins' response to the iDC discussion about Second Life on his blog. To understand his comments in context go here (He invites you to leave comments).
All of these examples work because Second Life does not perfectly mirror the reality of our First Lives, yet we could point to countless other more mundane and everyday ways that Second Life and other multiverses can and are being used to facilitate meetings in real world organizations, including those which result in all kinds of real world political effects.
That said, as Steven Shaviro notes on the iDC discussion list, there are some limits to the kinds of politics that can be conducted through Second Life at the present time:
Overall, Second Life is connected enough to "first life," and mirrors it closely enough in all sorts of ways, that we can pretty much do "there" the same sorts of things -- especially collaborative, social things -- that we do "here."... A protest against the Iraq war in Second Life is little more than an empty symbolic gesture; but one might cynically argue, especially given the tendency of the media to ignore them, that real-world protests against the war, however many people they draw, are at this point little more than empty symbolic gestures either.
On the other hand, I don't think that one could find any equivalent in Second Life of political organizing that takes place in "first life": if only because the people in Second Life are a fairly narrow, self-selected and affluent, group.
This goes back to the debate we've been having here about whether Second Life participants constitute a niche or an elite. Either way, the inhabitants of Second Life certainly are not a representative cross section of the society as a whole, and there are many people who are excluded through technological or economic barriers from being able to participate in this world. These factors limit the political uses that can be made of SL: they make it hard for us to insure that a diversity of opinions is represented through the kinds of political deliberations that occur here; they make it easy for participants to ignore some real-world constraints on political participation, starting with the challenges of overcoming the digital divide and the participation gap; and they make it hard to insure the visibility of online political actions within mainstream media.
That said, I don't think we can discount the political and personal impact that these online experiences may have on the residents of SL. We simply need a broader range of models for what a virtual politics might look like and need to understand what claims are being made when we debate the political impact of these virtual worlds.
Another list participant, Charlie Gere, goes a lot further in dismissing the value of Second Life. He takes issue with my claim that the participatory culture represented on SL is worth defending. Here's part of what he wrote:
It would seem to me obvious that trying to make some sense of and find ways of mitigating the violence and injustice in the complex world and culture we already necessarily inhabit, not least bodily, is far more pressing and considerably more worth defending than any supposed capacity to 'design and inhabit our own worlds and construct our own culture'. This seems to me to be at best a license for mass solipsism and at worst something like the kind of thinking that undergirds much totalitarianism, as well as an evasion of our responsibilities to the world as we find it. Such a fantasy seems to be at play both in the relentless construction and assertion of 'identity', a drive that militates against proper social solidarity, and thus plays into the hands of those sustaining the status quo, and in the fantasy entertained by the Bush government that the Middle East can just be redesigned as if in some video game.
Apart from anything else, culture is not something that can simply be constructed. It is something we are thrown into and which we can only at best try to negotiate our relationship with. Culture necessarily involves other people and prior existing structures. Has Jenkins considered what it would mean if everyone felt free to 'construct their own culture'? Even if such a thing were possible, it is certainly not desirable, especially if we have any hope of producing a properly participatory culture.
Frankly, as far as I am concerned, SL is really just a kind of cultural pornography, and is to the real business of culture what masturbating is to sex with another person. I like masturbation as much as the next man, or indeed woman, but I don't make the error of mistaking it for something it isn't. Apart from anything else it lacks precisely the element that sex has, that of involving a proper, embodied responsibility to someone else and to the potential consequences of the act itself.
There are lots of misperceptions embedded in these comments. To start with, I was not suggesting that we should be concerned with SL to the exclusion of concern with the real world. But I do see the struggle to preserve participatory culture as a fundamental political struggle in the same way that the right to privacy or the efforts to defend free speech are foundational to any other kind of political change. We are at an important crossroads as a society: on the one hand, we have new tools and social structures emerging that allow a broader segment of the population than ever before to participate in the core debates of our time. These tools have enormous potential to be used for creative and civic purposes. On the other hand, we are seeing all kinds of struggles to suppress our rights to deploy these new tools and social structures. Even as we are seeing a real promise of expanding free speech, we are seeing real threats to free speech from both corporate and governmental sources. We should be working to broaden access to the technologies and to the skills and education needed to become a full participant rather than having to defend the new communication infrastructure against various threats from government and business.
Gere understands what's going on in Second Life primarily in individualistic rather than collaborative terms. It would indeed be meaningless to describe a world where everyone constructs their own culture. Culture by definition is shared. But it is not absurd to imagine a world where everyone contributes to the construction of their culture. It is not absurd to imagine different projects in SL as representing alternative models for how our culture might work. Indeed, the virtual world allows us not only to propose models but to test them by inviting others inside and letting them consider what it might feel like to live within this other kind of social institution. I think of what goes on there as a kind of embodied theory. And I think what is interesting is that these are intersubjective models that are indeed being taken up and tested by communities large and small.
In each of the examples I cited above, participants are learning how to work together with others through the creation of a shared virtual reality. We certainly need to spend more time exploring how we can connect what happens in these worlds back to our everyday lives but that doesn't mean that what occurs in a symbolic space is devoid of a real world social and political context.
Often, real world institutions and practices constrain our ability to act upon the world by impoverishing our ability to imagine viable alternatives. This is at the heart of much of the writing in cultural studies on ideology and hegemony. SL offers us a way to construct alternative models of the world and then step inside them and experience what it might feel like to live in a different social order. I think there are some very real possibilities there for political transformation.
iDC -- mailing list of the Institute for Distributed Creativity iDC[at]mailman.thing.net http://mailman.thing.net/cgi-bin/mailman/listinfo/idc
March 08, 2007
A Mobile Radicals Project
If computer applications are to become pervasive, they must become part of the everyday fabric of our lives, providing users with the ability to interact with objects and places in both the real and virtual worlds. One trait of human interaction with objects appears to be an inherent passion for leaving our mark on them. In our current society, this is most readily evident in graffiti spray-painted in public places. To some, this is urban art reflecting the communities in which it resides, whilst to many it is an act of vandalism.
Spray-can graffiti divides communities and generations over how it should be dealt with, whether by complete acceptance or punitive action. With MobSpray we have developed a system that tries to bridge this divide: it provides writers with a means of tagging their environment, using mobile phones and RFID tags, whilst minimising the physical effects on the landscape of the communities where it resides.
Writers’ tags are deposited in, and collected from, a database operating on a central server over a GPRS connection initiated by a Java application on a Nokia 5140 with a built-in RFID reader. These tags are currently displayed on the writer’s phone, but the system is being evolved so that they can be projected onto walls in urban landscapes.
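The deposit/collect cycle described above reduces to a simple server-side store keyed by the physical tag's unique RFID id. Here is a minimal sketch of that idea; the class, method names, tag ids and writer names are all hypothetical, not MobSpray's actual schema:

```python
# Hypothetical sketch of a MobSpray-style tag store. In the real system
# this would live on the central server reached over GPRS; here it is
# just an in-memory dictionary keyed by the RFID tag's unique id.

class TagStore:
    """Maps a physical RFID tag id to the virtual graffiti left on it."""

    def __init__(self):
        self._tags = {}  # rfid_id -> list of (writer, artwork) entries

    def deposit(self, rfid_id, writer, artwork):
        """A writer 'sprays' a virtual tag onto the RFID-marked surface."""
        self._tags.setdefault(rfid_id, []).append((writer, artwork))

    def collect(self, rfid_id):
        """Return every virtual tag attached to this physical location."""
        return list(self._tags.get(rfid_id, []))

store = TagStore()
store.deposit("04:A2:19:B7", "kozmo", "throwie_v1.png")
store.deposit("04:A2:19:B7", "riot", "wildstyle.png")
print(store.collect("04:A2:19:B7"))
```

The physical wall stays untouched; only the database accumulates layers of tags, which is precisely the point of the piece.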
Second Life web 2.0 tools
Twitter and Sloog
The Beautiful Simplicity of Twitter (and BlogHUD): Second Life resident Koz Farina, creator of the very popular BlogHUD tool for blogging from within SL, is developing a system to allow you to cross-post your BlogHUD posts to your account on Twitter, the hot new social site that lets you miniblog along with your friends. Koz is already feeding all BlogHUD posts to a Twitter BlogHUD page. This is just the latest entry into a growing pool of Twitter-to-SL mashups from people like Ordinal Malaprop and Kisa Naumova, among others. And in fact, there’s been a huge flowering of ancillary Twitter apps since the service launched last fall. Why? Because Twitter is incredibly compelling, for a number of reasons. One of the most important, in my opinion, is the almost complete lack of button-based features that Twitter offers to its users. (Although I’d love for someone to build the wish app described at the end of this post.) More » [posted by Mark Wallace on 3pointD]
Sloog Gets its Tags Up in Second Life: I’m going to go ahead and call Sloog the best Web 2.0-style site for the virtual world of Second Life that I’ve seen yet. Sloog lets you tag locations in Second Life, storing them in the Sloog system so you can access them later on the Web. It’s a bit like del.icio.us tagging or Digg-ing virtual places, and while it’s not a new function for SL users, this is the best implementation of it so far. More » [posted by Mark Wallace on 3pointD]
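Functionally, del.icio.us-style place-tagging is just an inverted index from tags to in-world coordinates. A few lines sketch the idea; the function names and the (region, x, y, z) tuple format are illustrative assumptions, not Sloog's actual schema:

```python
# Hypothetical sketch of a Sloog-like tag index: each tag maps to the
# set of in-world places it has been applied to, so places can later
# be looked up by tag from the Web.
from collections import defaultdict

tag_index = defaultdict(set)   # tag -> set of (region, x, y, z)

def tag_location(tag, region, x, y, z):
    tag_index[tag].add((region, x, y, z))

def places_for(tag):
    return sorted(tag_index[tag])

tag_location("galleries", "Midnight City", 128, 128, 30)
tag_location("galleries", "Nova Albion", 50, 200, 22)
print(places_for("galleries"))
```

The same index, queried from a web page, is what turns a private landmark list into a shareable, browsable folksonomy of places.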
Independent Robotic Community
Social Network for Robots and Humans
"In a special edition of El Pais Digital reporting on the Arco festival in 2001, the Spanish artist Ricardo Iglesias, one of the net art pioneers in his country, said that nothing or almost nothing ("only cliché") can be said about net art in five or six lines. However, he mentioned a series of isolated topics that, interconnected and related to each other, really express net art: the network, global and decentralized communication, interactivity, virtual spaces, telepresence, chaos theory, the active and interactive principle, telematic interfaces, post-biological culture, the hypertextual perspective, online chat, the rhizome image and the state of uncertainty.
His latest artwork, Independent Robotic Community, is a sum of all of the above. Developed with Gerald Kogler, the project focuses on new forms of interaction between robots and humans on two levels. The first features a community of 20 small robots divided into two groups. Each group has a primary level of socialization and a series of sounds forming a single vocabulary. Each robot's initial state consists of a very simple movement within a delimited spatial environment. When it comes across other robots, it swaps data about its state through sounds and increases its degree of socialization and the complexity of its movements. On the second level, humans can interact with the robots using mobile phones and the Internet, communicating with them in real time and further influencing the socialization process within the independent groups. The result is an original visual representation of a social network that includes both the subjective point of view of a spy camera and the graphic display of social statistics. As if to confirm that net art is the art of networking." Valentina Culatti, Neural.
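The first-level dynamic described above can be caricatured in a few lines of simulation. Every constant below (arena size, meeting radius, step sizes) is an invented illustration, not a parameter of the actual installation:

```python
import random

random.seed(0)  # deterministic toy run

# Caricature of the installation's first level: 20 robots wander a
# bounded arena; on meeting, they swap "sounds" and raise their degree
# of socialization, which in turn loosens their movement.

class Robot:
    def __init__(self, name):
        self.name = name
        self.x = random.uniform(0, 10)
        self.y = random.uniform(0, 10)
        self.socialization = 0
        self.vocabulary = {name}  # the sounds it can emit

    def step(self):
        # Movement complexity grows with socialization.
        reach = 0.5 + 0.1 * self.socialization
        self.x = min(10.0, max(0.0, self.x + random.uniform(-reach, reach)))
        self.y = min(10.0, max(0.0, self.y + random.uniform(-reach, reach)))

    def meet(self, other):
        # Swap state data ("sounds") and deepen socialization.
        self.vocabulary |= other.vocabulary
        other.vocabulary |= self.vocabulary
        self.socialization += 1
        other.socialization += 1

robots = [Robot(f"r{i}") for i in range(20)]
for _ in range(100):
    for r in robots:
        r.step()
    for i, a in enumerate(robots):
        for b in robots[i + 1:]:
            if abs(a.x - b.x) < 1 and abs(a.y - b.y) < 1:
                a.meet(b)

# The second level (humans nudging the process by phone or Internet)
# would amount to calling meet() or perturbing positions from outside.
```

Even this toy version shows the installation's core loop: proximity produces exchange, exchange produces complexity.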
March 07, 2007
Mixed Reality Science in Second Life
[...] Matt Biddulph is assessing Second Life for Nature magazine, in deliciously empirical ways. Physical microbiology is transformed into virtual macrobiology as Matt renders individual cells in Second Life. The original data is taken from four-nanometer scans of the real cells. Biddulph works on making constructs of these cells that are large enough to walk around in and examine in detail. He also imports a quakeworld model or two for good measure. Matt Biddulph shows off his work in a properly buzzword-compliant presentation at the Google Code Jam. VRML always gets a laugh. Take two minutes to watch the video. Extended blog post on Biddulph's SL project here [...] [blogged by W. James Au on New World Notes] Also see the Mediamatic review Building Bacteria in Second Life.
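To get a feel for the scale jump involved, here is the back-of-envelope arithmetic. Only the four-nanometer scan resolution comes from the post; the cell size and the walkable build size are invented assumptions for illustration:

```python
# Back-of-envelope for the magnification in a walkable-cell build.
CELL_M = 2e-6        # assumed: a ~2 micrometre bacterium
BUILD_M = 20.0       # assumed: a 20 m walkable in-world construct
SCALE = BUILD_M / CELL_M        # ten-million-fold magnification

SCAN_RES_M = 4e-9    # the four-nanometer scan resolution from the post
voxel_in_world = SCAN_RES_M * SCALE
print(round(voxel_in_world, 3))  # 0.04 -> each scan voxel spans about 4 cm
```

At that magnification the scan data is comfortably finer than the avatar walking through it, which is what makes the build examinable "in detail."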
March 06, 2007
Word Finder, Google Earth Edition
1st Life / 2nd Life Game
Julian Bleecker writes: This is a 1st Life / 2nd Life game that is based on paper-based word-finder games. In this game, you try to identify dictionary words within a grid of letters and circle them. In this variation, the landscape becomes the game grid and you have to walk (or run!) to generate a virtual line that runs through the letters in order to "capture" the word and score points. A GPS is used to help the team identify where the letters are in 1st Life space, and to "capture" the words composed of those letters by recording "track logs" through them. The GPS is able to record where you are in 1st Life, and to create "tracks" of locations, which are essentially connected points that make up lines on the GPS. These track logs can be seen as a form of geospatial drawing — the tracks you make delineate lines which can compose various forms of drawing. In our case, the lines are meant to run through the letters that make up the words. (Note that in the more traditional word-finder games one circles the letters — that would be a bit tricky in this variation, so we'll just draw an approximate line _through_ the letters.) Continue reading >>
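Since a track log is just an ordered list of GPS fixes, the "capture" test reduces to checking that the walked line passes close to each letter in sequence. A minimal sketch of that check; the coordinates and the 5 m capture radius are invented for illustration, not the game's actual rules:

```python
# Hypothetical capture check: a track log (an ordered list of fixes,
# in metres) "captures" a word if it passes within `radius` of each
# letter position in turn.

def captures(track, letter_points, radius=5.0):
    idx = 0
    for tx, ty in track:
        lx, ly = letter_points[idx]
        if ((tx - lx) ** 2 + (ty - ly) ** 2) ** 0.5 <= radius:
            idx += 1
            if idx == len(letter_points):
                return True  # the drawn line ran through every letter
    return False

word = [(0, 0), (20, 0), (40, 0)]                     # letters laid out on the field
walk = [(-3, 1), (8, 2), (21, -1), (33, 0), (41, 2)]  # a recorded track log
print(captures(walk, word))  # True
```

Requiring the letters in order is what makes the track a drawn line through the word rather than a scattered visit to its letters.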
DualTerm :: Second Life
An Immersive Airport Experience
DualTerm -- by ERIK ADIGARD + CHRIS SALTER -- is an online installation that uses the platform and interaction framework of the networked 3-D virtual world Second Life. Exploring two contrasting forms of immersion (noise versus silence, the intensity of data versus the experience of emptiness) that mark the contemporary airport experience, DualTerm presents a visual and aural walk-through 3-D environment that alternates the visitor between the built, physical space of the new Moshe Safdie terminal at Pearson International and an exact 3-D simulacrum running in Second Life on the Terminal 01 kiosk.
The project situates itself in the increasingly complex territory currently being mapped between physical reality and its simulacra within 3-D simulations, gaming environments, metaworlds and other data-driven representations of built space. More specifically, DualTerm aims to explore an unavoidable tension in which the modern traveler is increasingly caught within the context/site of the contemporary airport: the desire to be immersed in the public space of information and noise overload versus the craving for a private, interior state of suspension, contemplation, groundlessness and stillness before the act of flying.
The dramaturgy of the installation involves an interactive, 3-D walk-through of the Pearson terminal by a visitor-controlled avatar, which makes experiential the contradictions of information saturation versus the desire for emptiness and stillness. The visitor's Second Life avatar can thus wander in a 3-D simulation of the actual terminal, entering two potential experiential zones: (1) a data space in which real-time data, noise, CNN broadcasts and LED advertisements are texture-mapped onto the existing features of the terminal animation (floor, walls, ceiling, furniture, visitors), creating a frenzied and overwhelming environment of noise, color, sound, image and information glut, and (2) a void space contained within the data cube in which the visitor/avatar confronts an environment no longer determined by external flows of data but by their own internal experience, one that occurs at the threshold between arrival in the terminal and taking off from earth. DualTerm thus oscillates between two alternating intensities: the seduction of data saturation as it is generated by the airport and a feeling of interior suspension that can evoke the conditions of flying.
February 27, 2007
RL >><< SL
Sousreality -- by Drew Harry, Dietmar Offenhuber and Orkan Telhan at the MIT Media Lab -- is a pair of connected spaces linked through the metaphor of a crystal ball. In the SIGGRAPH gallery is a crystal ball that peers into Second Life, and in Second Life there is a crystal sphere containing a real-time inside-out vision of the gallery. The images of the spaces align visually and are also connected by audio.
The metaphor of the crystal ball informs this piece in several ways. Crystal balls are a tool for seeing into an alternate reality, a reality that may have something to do with the future, or may simply be created by our own minds. This vision is available to anyone, but requires the interpretation of a mystic. Our crystal ball functions the same way. It requires our presence and manipulation to keep the connection clear and open. We see our role as technologists as similar to the role we play in this piece - we are expected to interpret the lessons of Second Life to predict the future of virtual worlds but often feel no more skilled at this task than mystics attempting to interpret visions in a crystal ball.
Through the larger form of the piece we address popular interpretations of Second Life. Virtual worlds are frequently described as subsidiary to the “real” world. We emphasize this by presenting Second Life as a world within a world, placing visitors to the gallery in a perceived position of power over the tiny avatars in the ball.
The audience is invited to gaze into the crystal ball and ask questions about the virtual world. The residents will happily answer, for a small fee, and we'll do our best to help them understand what they have to say and see what we can learn about the future in the process.
Status: Sousreality is a work in progress - the concept described on this page doesn't exist quite yet. What you see in the video is a mockup. We'll probably be building the piece for real over the next six months, depending on a variety of things. If you want a tour of the current mockup, feel free to email me (address below) or message me in SL at Zetetic Aubret. The land the mockup is on is shared with lots of other people who may not want lots of visitors, so we're hesitant to post a slurl right now. We'll update this page when we have more to show!
Acknowledgments: We'd like to thank Judith Donath and Aaron Zinman for helping develop the concept. We'd also like to thank all the Second Life residents who helped us make the movie and screenshot - Jaydiem Eun, Toto Donat, JC Fremont, Maximilian Haas, Squee Janitor, Mikiko Miasma, Abby Okey, and Karolwojtyla Sobocinski.
Contact: If you have any questions or comments, I can be reached at dharry at media . mit . edu. Read an interview >>
February 23, 2007
Experimental Gameplay: Toward a Massively Popular Scientific Practice
Massively multi-citizen science is almost here
Can a game developer be nominated for a Nobel Prize in one of the sciences by the year 2032? That's my plan, which I presented this past weekend at the annual meeting of the American Association for the Advancement of Science. You can download the slides from my talk, or read the related research paper (hot off the press!), or peruse some related links, on my AAAS webpage here. (Or see what Newsday took away from it here.)
My goal over the next decade is to support the development of a massively multi-citizen science through massively collaborative games (think: alternate reality games with real-world data embedded inside.) So in the near future, when the most creative, collective-intelligence gamers are grinding away 10, 20, 30, or more hours a week, they're grinding on real scientific research problems wrapped inside a yummy fictive or fantasy shell.
Yes, I am calling for a truly popular scientific research practice that engages the global public in hands-on, brains-on collaboration, via sites like Citizen Science and Amazon's Mechanical Turk and through immersive, story-driven play. Amateur participation + a creative commons for science literature + the stickiness of a well-designed game and well-told story = radically interdisciplinary mash-ups accessible to lay people and productive of real scientific insight.
Sound crazy? No way. This is seriously possible, and plausible. Here are three reasons why:
1) Science practice itself is increasingly leaning toward collective intelligence and amateur participation. You can read about it in the incredible Institute for the Future report: Delta Scan: The Future of Science and Technology, 2005-2055.
2) Meanwhile, there is no doubt -- as I argue in my new 50-page case study for the MacArthur foundation -- that alternate reality gamers are doing real CI investigations that would fully prepare them for real-world collaborative research. Their gameplay is already fundamentally a CI scientific effort to understand fake (fictive) data. I'm just proposing that we shove some real scientific data in there, while they're at it.
3) And perhaps most importantly, as Sean Stewart - the original and most esteemed alternate reality storyteller around - has famously said: "I do NOT assert that [alternate reality gaming] is the first, or greatest, example of massively multi-player collaborative investigation and problem solving. Science, as a social activity promoted by the Royal Society of Newton's day and persisting to this moment, has a long head start and a damn fine track record.... We just accidentally re-invented Science as pop culture entertainment."
So, yes, if this sounds interesting, get the slides. And here are a couple of other sites to get you thinking: "Fostering Scientific Habits of Mind in the Context of Online Play" and MacArthur Spotlight on Digital Media & Learning.
If you want to propose a data set, scientific problem, or research focus for a massively multi-citizen science game, or if you want to be notified when there's such a game to be played, email me at jane @ thenameofthisblog dot com. [blogged by Jane on Avant Game]
February 19, 2007
In-site Montréal: Curator’s statement by Michelle Kasprzak :: In-Site Montréal is a collection of site-specific art presented on the portal pages of five wireless internet (Wi-Fi) hotspots in Montréal. Artists Nicolas Fleming, Maria Legault, and Virginie Laganière have created artworks that may be viewed when users of the free service provided by Île Sans Fil log in to their accounts at the selected hotspots.
The five hotspots are rooted in specific spaces, each one with its own unique properties. The In-Site Montréal project grew out of a desire to augment the experience of place for Wi-Fi users, offering an additional layer of information within the hotspot environments. The artworks that are presented on the portal pages are inventive responses to the characteristics of the spaces that the hotspots inhabit.
“The window appears to look out onto a dataspace that continues beyond the borders of the window itself. [...] But the illusion quickly wears off. The window starts to feel more two-dimensional, more like a piece of paper than a portal. The view-space appears to flatten out, to the point where the window and the data contained within the window merge.”i
Here Steven Johnson is describing the effects of using a scrolling window on a computer screen for the first time, and I am referring to it (ever so slightly out of context) to illustrate a point about the works that are being presented within In-Site Montréal. The users of the Île Sans Fil wireless network are, arguably, all hardened internet users, for whom the complexities of scrolling windows and portals and most other graphical user interface-related things are trivial.
However, since they have reached the secondary stage that Johnson refers to, where the “window and the data contained within the window merge”, there are certain expectations for an experience that can keep pace with their ability to leap from hyperlink to hyperlink.
Portals, by and large, are clumsy. The portal that occasionally pops up on my screen, which is associated with my Hotmail account, assumes I am interested in all manner of celebrity gossip and sports scores, and regional news for an area that is 45 miles to the west of where I currently live. But the works presented as part of In-Site Montréal are not attempting to form part of a portal experience that would guess the preferences of each user. The works are dealing directly with the particularities of the site where the hotspot is, which is a small enough area to be clearly defined as a common element in each user’s experience. For the elite users, something at last may jump out at them from this flattened dataspace where things feel as two-dimensional and familiar as a piece of paper. Instead of the usual hurried clicking to get past a familiar “roadblock” and get to the destination they intended to go to, they may now feel that the artists of In-Site Montréal have added an observation on their local café, library, or artist-run centre that matters, that they can respond to, that strikes them out of their reverie.
“A provisional conclusion might be that in advanced art practices of the past thirty years the operative definition of the site has been transformed from a physical location —grounded, fixed, actual— to a discursive vector—ungrounded, fluid, virtual.”ii
Parts of this definition of site – fluid, virtual – are key concepts that In-Site Montréal works with. The layer of information that floats on top, as a meta-layer to the usual experience of café users in the Île Sans Fil network, is meant to be something a bit fluid, virtual and unexpected. The only definition that it does not fit is that of “ungrounded”, precisely because it is the grounding in the site that sets this project apart. Maria Legault’s interventions with her Free Sugar project may be considered particularly grounded in the sites in question. She worked with two locations, Studio XX and Café Utopik, and developed an extension of her Free Sugar project around both locations. At Studio XX, a feminist art centre that primarily consists of an office space and computer lab, she created a performance event entitled the Free Sugar Salon, which was open for anyone to attend and have the holes in their lives filled with pink pudding. She filled cracks in the architecture of Studio XX with pink icing, and then turned her attention to the attentive public that arrived at the studio, counseling them and filling their mouths with pink pudding to console them. At Café Utopik, a café/bar that regularly hosts bands and spoken word events, she conducted a surreptitious intervention, filling crevices and holes in the architecture and surrounding environment of the Café with pink icing, and documenting it in photographs. Both of these projects are presented on the portal pages of Studio XX and Café Utopik as video documentation of these actions.
Artist Virginie Laganière focused on two very different areas: the Jean-Talon Market and the area around the popular meeting place, Café Utopik. Her site-specific video pieces were shot with regular video cameras, as well as custom camera rigs attached to her body. She then manipulated the footage further in the editing suite, adding her own compositions as soundtracks and prolonging moments that happened on camera, providing us a moment to reflect on their significance. She specifically chose to document moments where people were not as present in these spaces, and where the patterns of movement in the “off-peak” hours would become more apparent.
Through her augmentations in the editing suite, she also aims to create a piece of work that allows us to see beyond our usual clouded and harried view of the urban environment, and enjoy a view of the built environment that is tranquil, constructed, and part of an aesthetic experience. In particular, her video piece presented on the portal page of the Jean-Talon Market, usually a place so buzzing with activity as to be nearly impossible to navigate, was shot in the very early hours of the morning, when market stall owners are setting up. This meditative and slow period of the Market’s activity is hidden from most of the Market’s patrons, and Virginie’s artful editing brings out the poetry in the stasis of these moments.
Nicolas Fleming's performance art videos also present us with an alternate view of our public spaces. His work is presented at Café Kafeïn and Laïka, because of both the subject matter that he chose and the locations that he performed in. At Laïka, an extremely popular and hip bar/restaurant/club, he presents se traîner, a piece wherein he drags himself out of his apartment (which is within the same building complex that Laïka is in) and down the stairs to an escape portal – an automatic garage door. Throughout the performance he can be heard grunting with the strain of moving himself in such an unconventional way, and by the end of this performance, he is clearly exhausted. Users viewing this video must marvel at this strange and strenuous test of his body, which took place in relative secret behind the scenes of Laïka’s festive décor. In the other piece, traîner un dj, Fleming travels to Île Sainte-Hélène to encase a dj in a canvas sac, and drag him along the pavement, with the sounds of Piknic Electronique (Montréal’s outdoor summer dance club) pounding in the background. This work is presented at Kafeïn due to the dj culture that is resonant there; the dragging of a dj must be somewhat humorous to the clientele.
Telematics is a term used to designate computer-mediated communications networking involving telephone, cable, and satellite links between geographically dispersed individuals and institutions that are interfaced to data-processing systems. It involves the technology of interaction among human beings and between the human mind and artificial systems of intelligence and perception. The individual user of networks is always potentially involved in a global net, and the world is always potentially in a state of interaction with the individual.iii
The virtual spaces that In-Site Montréal inhabits are amorphous areas around several accepted gathering places such as cafés, galleries, markets, and bars. They are perhaps places where, as an internet user, you may intend to use the opportunity of connectivity to the network to look outward, to read news of distant places or connect with friends far away through e-mails and online social networking sites. The art practice of telematics in particular addresses the creative possibilities when two parties are connected over distance to communicate. In some way, the pieces presented on the portal pages of Île Sans Fil’s network as part of the In-Site Montréal project present something that is almost anti-telematic, in that the works look inward rather than outward. In the case of this project, a connection to someone across the globe is not sought; it is shunned in favour of a further examination and rumination on the details of the local environment. A local resident, who is perhaps used to the culture at Café Utopik, may be best able to chuckle at the video of pink icing being added to the sign above the door. This intense inward-looking that these pieces commit to is the essential point of the project. Instead of seeking to look outward and connect with others who are in a radically different geographic space, In-Site Montréal hopes to reconnect locals with their own space, through the language of culture, compelling users of the network to turn their gaze inward enough to consider the cultural resonances that are possible.
- Michelle Kasprzak, 2006/2007
i Steven Johnson, Interface Culture: How New Technology Transforms the Way We Create and Communicate (New York: Harper Collins, 1997), p. 86.
ii Miwon Kwon, One Place After Another: Notes on Site Specificity, October 80 (Spring
iii Roy Ascott: “Is there love in the telematic embrace?”
February 14, 2007
Land to Google Earth
Hello, World! is a work of land art in the real world (here, to be exact) designed for the virtual world of the software Google Earth:
A Semacode measuring 160 x 160 meters was mown into a wheat field near the town of Ilmenau in the state of Thuringia (Germany). The code consists of 18 x 18 bright and dark squares which, when decoded, produce the phrase “Hello, world!”. The project was realized in May 2006 and photographs were taken of it during a picture flight the following month.
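The layout arithmetic is easy to check: 18 modules across 160 metres gives each module a patch of roughly 8.9 m on a side, a plausibly mowable unit. A small sketch of that mapping; the actual bit pattern would come from a Data Matrix encoder, which is not reproduced here:

```python
# Layout arithmetic for the mown code: 18 x 18 modules in a 160 m square.
FIELD_M = 160
MODULES = 18
module_size = FIELD_M / MODULES   # ~8.89 m of wheat per module

def module_corner(row, col):
    """South-west corner, in metres from the field origin, of the
    patch to mow (dark module) or leave standing (bright module)."""
    return (col * module_size, row * module_size)

print(round(module_size, 2))  # 8.89
```

Walking the grid corner to corner with such a function is essentially what turns a bit matrix into a mowing plan.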
For full photographic documentation of the work, see the Flickr stream here. For examples of work connecting the virtual and the real in differing ways, see the work of Aram Bartholl, particularly Plazemark and Map. [blogged by Garrett on Networked Research]
Networked Accessories and More
Interview with Martin Frey
Gizmodo Gallery: Networked Accessories and More with Martin Frey :: Interview/Article by Jonah Brucker-Cohen
In the rapidly changing world of consumer electronics and portable devices, artists and designers are increasingly embedding social problems and concerns into these objects to shift their intended focus and intent. Taking this credo to an extreme with his work on technologically enhanced objects and accessories is German artist/designer Martin Frey. From creating a watch that keeps track of time and positioning data to guide you towards your destination with his "Just in Time Watch" to integrating GPS and motors into the soles of hiking boots with "Cab Boots", Frey is interested in the connections and clashes between physical realities and digital data displays. Gizmodo recently caught up with Frey to discuss his wide variety of projects, his take on the future of interactivity, and how designers of technologically enhanced consumer products often expect too much of an investment from their users.
WOW in real life!
German artist Aram Bartholl did a “World of Warcraft” performance in public space in which his name followed him, floating over his head. Watch the video above. He also recently did more of these installations in Gent, Belgium at the Vooruit; see pics. I also did an interview with Bartholl for Gizmodo that can be read here. [blogged by Jonah on coin operated]
Virtualizing the Physical by Greg L: We've had several discussions in the past about commingling virtual world technologies with physical spaces to form augmented realities. (E.g. 1, 2, 3, 4) To give credit where it's due, Jerry Paffendorf has often chimed in with some great links and interesting comments on this topic. (E.g. 1, 2, 3) From time to time, we've also discussed the increasing technological viability of virtual-real mashup games like Human Pac-Man. Continue reading "Virtualizing the Physical" on Terra Nova
February 13, 2007
When 1st Life Meets 2nd Life:
The 1685 Pound Avatar and the 99 Ton Acre
A Second Life avatar produces 1,685 pounds of CO2? And an acre of real estate in Second Life produces 99 tons of CO2? What gives?
One of my pet research projects has been to find ways to establish effective, playful bridges between 1st life and 2nd life. What is 1st Life and what is 2nd Life? By 2nd life, I'm not referring to the virtual world run by Linden Lab. For me, "2nd life" is an evocative metaphor that counterpoints the normal, human, physical, material world, which I refer to as 1st life. I'll grant that the language lacks precision, but I'm relying a bit on my own assumptions, which I think are fairly well-shared, as to what counts as a digital, networked, social environment. 1st life is then the non-networked, non-digital social environment.
The presentation I gave this week at Lift07, titled When 1st Life Meets 2nd Life, started with a reminder as to the material basis of 2nd life. There is "stuff" that undergirds digital networks — indeed every digital bit owes its life to some sort of material. Atoms compose digital data. There's that stuff that we're never really aware of unless we spend time working in the data center facilities where all of the Internets take physical form. 2nd life and its digital networks are made of heavy material — copper cable, steel racks to hold servers, rubber or plastic insulated power cabling, cooling systems, human labor, billions and billions of integrated circuits and the effluvia of the toxic chemicals expelled during their production, shipping and decomposition, etc. Our participation in this materiality probably ends at about the time we discard the cardboard box and styrofoam packing in which our new computers arrive.
But this is more than the William Mitchell bits-and-atoms thesis. It's not just the equivalence of bits and atoms, but the precise nature of that material: what kind of stuff are we talking about? It's not just what composes digital bits, but what physical material the use of digital bits, of digital networks, of our PCs, web and game servers, produces.
Perhaps the most unsettling material characteristic of our 2nd lives is the resources necessary to maintain them. Emailing, googling, blogging, uploading videos, downloading music — everything — owes a measurable and material debt to 1st life. When one computes the amount of power — normal 1st life electricity — consumed to maintain our 2nd lives, it becomes clear what that debt is. Or when one computes how many tons of CO2 emissions result from producing that electricity, assuming the majority of our power comes from plants that burn something that emits CO2 as a consequence of generating electricity.
I started running these numbers after reading a very interesting and thought-provoking discussion on Nicholas Carr's blog where he computed these figures based on some public figures he found pertaining to Linden Lab's Second Life environment. One could run the same numbers for any other digitally networked activity, like emailing or web surfing or whatever.
What's particularly appealing about choosing an online world like Second Life as my example is that its underlying metaphor is 1st life. Email would be another good example, as its 1st life counterpart, postal mail, requires energy and emits CO2 in its processing and delivery. But Second Life has more PR these days, so I'll use that as my example.
Second Life itself captures many of the important characteristics of 1st life and uses that to convey a sense of familiarity for the users. There is property in Second Life, waterways, buildings, etc. It's a 3D virtual world that is largely modeled on commonly held assumptions about what counts as 1st life. What it doesn't convey to its users is any kind of Second Life representation of the ecological cost of that Second Life world, which would be very cool — Second Life CO2 emissions, for instance, to correspond to equivalent estimates about how much CO2 is emitted in 1st life.
I was shocked at the numbers on Carr's blog, so I computed them myself to check the math. I revised some of his assumptions, so my figures are significantly more conservative than his. (I'd even go so far as to say that my figures are unrealistically low, because a more rigorous analysis would include estimates about the power consumption of the ancillary network devices between the user's computer and the data center.) I also culled from comments in Carr's blog post to refine some of the assumptions, especially the remarks from Second Life employees who have direct access to some of the power consumption figures.
There are a few additional assumptions I've made, mostly pertaining to what I think is a more realistic assumption as to how much power a typical home PC uses, and how often one might actually play Second Life.
Carr assumes that a home PC consumes 120 watts, which I think is much too low — I assume 300 watts, based on looking at the technical specifications of a mid-range Dell computer, and I also compute the power consumption of an LCD display. I also don't assume, as Carr does, that someone playing Second Life is playing 24 hours a day — I assume, averaged over a year, they will play eight hours per day. Some days they won't play, others they may invest 12 hours. I think 8 hours is a fair assumption.
I've also used the assumption that 1.35 pounds of CO2 is emitted per kWh of electricity produced.
A Linden Lab employee measured the power consumption of their servers and came up with a figure of 175 watts with the server running at full tilt. I assume that the servers basically run at full tilt 24 hours a day, seven days a week, and that servers of this sort make demands on the data center — cooling, power distribution, ancillary resources like lighting, operations center energy costs, keeping the candy and pop machine running in the break room, etc. — equivalent to 50% of their nominal energy use. So, a 175 watt server actually needs 175 + 87.5 = 262.5 watts to function in a data center.
I came up with the following figures:
Power Consumption Per Avatar Per Year (Second Life Servers): 153 kWh
Power Consumption Per Avatar Per Year (Home User's PC): 1,095 kWh
Total Power Consumption Per Avatar Per Year: 1,248 kWh
CO2 Emissions Per Avatar Per Year (Second Life Servers): 207 lbs (94 kilos)
CO2 Emissions Per Avatar Per Year (Home User's PC): 1,478 lbs (670 kilos)
Total CO2 Emissions Per Avatar Per Year: 1,685 lbs (764 kilos)
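Run through in code, the arithmetic behind these figures looks like the sketch below. Two numbers in it are my own inferences, not stated in the post: the 15 concurrent avatars per server (implied by the 153 kWh result) and the 75 W LCD draw (implied by the 1,095 kWh home-PC figure).

```python
# Back-of-the-envelope check of the per-avatar figures above.
HOURS_PER_YEAR = 24 * 365                      # 8,760 hours

# Server side: 175 W at full tilt, plus 50% data-center overhead, 24/7.
server_watts = 175 * 1.5                       # 262.5 W
server_kwh_per_year = server_watts * HOURS_PER_YEAR / 1000
avatars_per_server = 15                        # my inference, see lead-in
server_kwh_per_avatar = server_kwh_per_year / avatars_per_server

# Home PC side: 300 W PC + 75 W LCD, 8 hours/day averaged over a year.
pc_kwh_per_avatar = (300 + 75) * 8 * 365 / 1000

total_kwh = server_kwh_per_avatar + pc_kwh_per_avatar

# 1.35 lbs of CO2 emitted per kWh of electricity produced.
total_lbs_co2 = total_kwh * 1.35

print(round(server_kwh_per_avatar), round(pc_kwh_per_avatar),
      round(total_kwh), round(total_lbs_co2))  # → 153 1095 1248 1685
```

The home PC dominates: under these assumptions it accounts for nearly 90% of each avatar's footprint.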
Second Life is composed of regions that have a correspondence to normal 1st life acres. I've learned that there are 16 acres per region, and there are 4 regions per server, so there are a total of 64 Second Life acres per server. That means the power consumption per Second Life acre works out to 16.8 kWh per hour, or 147,168 kWh per Second Life acre per 1st life year. And that means that 23 pounds of CO2 is produced and exhausted into the 1st life atmosphere per Second Life acre per hour, or 198,677 pounds (90,118 kilos) per Second Life acre per year.
Some equivalence for perspective:
In 2003, the per-capita power consumption in the United States: 13,242 kWh
In 2001, the per-capita power consumption in Iceland: 26,947 kWh
In 2001, the per-capita power consumption in Kenya: 118 kWh
(World Resource Institute, EarthTrends — http://earthtrends.wri.org)
Every year, every Avatar in Second Life produces CO2 emissions equivalent to a typical, honking, bloated, arrogant SUV driving 1,293 miles, based on the assumption that this kind of SUV generates about 1.3 lbs of CO2 per mile.
If serving this page took the CPU on my modest web server 0.1 seconds to serve to you, it probably consumed about 0.004 watt-hours (assuming my clunker draws 100 watts + 50 watts for overhead). That works out to roughly 2.5 milligrams of CO2, not counting whatever your PC contributed to the production of CO2. Okay, I'm getting carried away, but you get the idea.
Why do I blog this? I find this kind of analysis fascinating and revealing. It is the kind of bridge between 1st life and 2nd life I am trying to build, where the semantic link between what goes on in our 2nd life worlds is made plain in its correspondence to 1st life. 2nd life activities are not the clean, sustainable, whole-earth-friendly activities the Bay Area, Web 2.0 crew may think they are. Despite the important evolution of human social formations that has arisen, the messiness of 1st life remains. Maybe there should be a little eco-meter on the dashboard of Second Life, World of Warcraft and, whatever, YouTube and my blog.

I'd be interested in computing the same figures for World of Warcraft. I suspect they're probably equivalent, although I'd probably bump up the average hours of play per WoW character quite a bit. I would need to know the distribution of simultaneous characters per server, or the number of servers per instance, as well as some sense of how much power is consumed by whatever server they may use.

It's about provoking some thinking about the material contingencies of our online activities. I won't really quibble about the numbers. Someone's going to want me to adjust something this way or that — the accuracy of the figure will be forever elusive, so I'm not interested in debating that, or tweaking some of the numbers. The point is: there is a debt paid for our online lives and we rarely think about it. How can we start to introduce the material aspects of this activity more directly? That is my goal here. [blogged by Julian on techkwondo]
February 12, 2007
Marginalia in the Library of Babel
Marginalia in the Library of Babel [in progress — alpha release]: "It starts with Borges. It always starts with Borges, the god of our hyperlinked souls. I fight the solitude of the vertigo he has imagined for me, and yet I may have finally found a way out of the labyrinth.
At night, I search for Borges, alone, hunched in the solitary chamber of an internet browser. If I was not seeking him, I’d be seeking myself. And I already know where I am and the speck size of my shadow on the Web."
Diigo Fiction: Marginalia in the Library of Babel: [This post offers context for a work-in-progress entitled Marginalia in the Library of Babel. alpha release.] Let us write stories in the margins of the Web: The web is becoming ours to write with. Whether supplying, ranking, or reviewing its contents or reordering the web with our folksonomic tagclouds, we are becoming the owners of more than just our Craig’s list and Ebay possessions. We are orchestrating this web and making of it what we will. Jeremy and Matt Kirschenbaum have reported on the moments when the tagclouds become art (see inset image made via TagCrowd.com). And now the web pages themselves have become our surfaces, our building blocks. Here’s how… More >> [blogged by Mark Marino on WRT]
January 09, 2007
Dune & Devil
Tracking Vienna + Tokyo through Dune+ Devil
Dune & Devil (2003-present) explores a space- and time-based phenomenon through communicational technoculture. The project «•» combines different systems such as global positioning, audiovisual media, mobile technology, telecommunication tools and specialized software to experience the stereotopographical synchronization of two individuals in different urban situations. We are trying to carry out this spatial experiment as a unique geosocial application, translating its individual, cultural and technological impact into our DIY habitat. The project produces a sociographic disposition of a mixed reality which can be observed through the interface of dune-n-devil.com.
We are using two handheld computers running a Windows Mobile operating system with additionally installed open source Java virtual machines (btw, thanx to mr. freebeans/japan for preparing Mysaifu!). A Java application developed specifically for this project interconnects the two mobile computers via UMTS/GPRS for communication and navigation. The incoming GPS data is streamed to a MySQL database which collects all the data produced (both outputs: Dune/Vienna and Devil/Tokyo). Finally, the database feeds the Flash application that visualizes the project «•» on the website.
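The pipeline described above — two devices streaming position fixes into one shared store that a visualization layer later queries — can be sketched minimally as follows. Table and field names are hypothetical, and Python's built-in sqlite3 stands in here for the project's MySQL database.

```python
# Sketch of the GPS-to-database pipeline: each handheld streams position
# fixes into one table keyed by device ("dune" in Vienna, "devil" in
# Tokyo); the visualization layer queries both tracks for display.
import sqlite3
import time

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE fixes (
    device TEXT, ts REAL, lat REAL, lon REAL)""")

def record_fix(device, lat, lon, ts=None):
    """Store one incoming GPS fix for a device."""
    db.execute("INSERT INTO fixes VALUES (?, ?, ?, ?)",
               (device, ts if ts is not None else time.time(), lat, lon))

# Two observers, two cities (illustrative coordinates).
record_fix("dune", 48.2082, 16.3738)    # Vienna
record_fix("devil", 35.6762, 139.6503)  # Tokyo

# What the visualization layer would pull for synchronized display.
for device, lat, lon in db.execute(
        "SELECT device, lat, lon FROM fixes ORDER BY ts"):
    print(device, lat, lon)
```

Keeping both observers in one timestamped table is what makes the "synchronization" possible: the two tracks can be replayed against a common clock.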
Public space is constantly enhanced with new infrastructures of communication such as GPS, wireless internet, telecommunication protocols, location-based services, mediatectures and many more. This modification of reality happens through the new possibilities of information technology, interface culture and interaction. The basic goal of the communication-based synchronization system «•» is to track digital processes in two different cultural situations (Vienna & Tokyo) through two different observers (Dune & Devil), and to merge them into one sensation. We are trying to carry out this spatial experiment as a social application, in a creative process of constructing new ways in media and art. Dune & Devil will inscribe themselves into these connected, synchronized, totally different realities. This change of perspective through individual selection, dependent on attention and on the cross-linking of these two realities, shows how space and time are related to the observer and the system of observing.
December 26, 2006
Where Real Money Meets Virtual Reality
The Jury Is Still Out
[Image: Veronica Brown is able to make a living selling her digital fashions in the online world Second Life. Photo Credit: Linden Lab] Where Real Money Meets Virtual Reality, The Jury Is Still Out :: By Alan Sipress, Washington Post Staff Writer, Tuesday, December 26, 2006; A01.
Veronica Brown is a hot fashion designer, making a living off the virtual lingerie and formalwear she sells inside the online fantasy world Second Life. She expects to have earned about $60,000 this year from people who buy her digital garments to outfit their animated self-images in this fast-growing virtual community.
But Brown got an unnerving reminder last month of how tenuous her livelihood is when a rogue software program that copies animated objects appeared in Second Life. Scared that their handiwork could be cloned and sold by others, Brown and her fellow shopkeepers launched a general strike and briefly closed the electronic storefronts where they peddle digital furniture, automobiles, hairdos and other virtual wares.
"It was fear, fear of your effort being stolen,'' said Brown, 44, whose online alter ego, Simone Stern, trades under the name Simone! Design.
Brown has reopened her boutique but remains uncomfortably aware that the issue of whether she owns what she makes -- a fundamental right underpinning nearly all businesses -- is unresolved.
As virtual worlds proliferate across the Web, software designers and lawyers are straining to define property rights in this emerging digital realm. The debate over these rights extends far beyond the early computer games that pioneered virtual reality into the new frontiers of commerce.
"Courts are trying to figure out how to apply laws from real life, which we've grown accustomed to, to the new world," said Greg Lastowka, a professor at Rutgers School of Law at Camden in New Jersey. "The law is struggling to keep up."
U.S. courts have heard several cases involving virtual-world property rights but have yet to set a clear precedent clarifying whether people own the electronic goods they make, buy or accumulate in Second Life and other online landscapes. Also unclear is whether people have any claim when their real-life property is depicted online, for instance in Microsoft's new three-dimensional renderings of actual real estate.
The debate is assuming greater urgency as commerce gains pace in virtual reality. In Second Life, where nearly 2 million people have signed up to create their own characters and socialize with other digital beings, the virtual economy is booming, with total transactions in November reaching the equivalent of $20 million. Second Life's creator, Linden Lab, allows members to exchange the electronic currency they accumulate online with real U.S. dollars. Last month, people converted about $3 million at the Lindex currency market.
Second Life's economy has been surging since Linden Lab made the unusual decision three years ago to grant users intellectual property rights for what they create with the Web site's free software tools. Thousands of people have created homes and businesses on virtual land leased from the site and are peddling virtual items as varied as yachts and ice cream.
Congress has taken note and is completing a study of whether income in the virtual economy, such as from the sale of gowns that Brown makes, should be taxed by the Internal Revenue Service. The Joint Economic Committee of Congress is expected to issue its findings early next year.
"There seems to be a lack of ground rules in an area that would have explosive growth in the next decade or two," said Christopher Frenze, the committee's executive director.
Though she grew up watching her mother at the sewing machine, learning the craft with each loving stitch of the family's clothes, Brown never considered making it a career until two years ago, when she entered Second Life. Within days, she studied up on the basic software skills and began designing virtual women's apparel from her home in Indiana. "When I design," she said, "I think about how the cloth falls and the sheen silk has compared to satin." She said she now spends 70 hours a week on her trade. Starting with four original outfits, she now offers 1,200 designs and has also moved into men's fashion.
But the rogue program, called a copybot, that appeared last month in Second Life underscored the need to clarify her property rights. After the attack, Linden Lab announced efforts to ban the program and encouraged users to report abuses. Some users argued that even stronger property protections were needed.
"I'm feeling uncomfortable," Brown admitted. "I'm safe for now, but it's very tentative."
Linden Lab made cyber-history when it gave Second Life users the intellectual property rights to their creations -- similar to the copyright real-world authors have to their writings. By contrast, most Web sites offering virtual experiences have not accorded users any property rights, requiring them to accept a license agreement stating that all content belongs solely to the Web site owner.
Four years ago, several online gaming veterans tried to get around this agreement and make real money by selling game items from Dark Age of Camelot on eBay and at specialty online auctions. The items, which included weapons, armor and specialized characters, in some cases went for more than $300 each. The developers of the Camelot game blocked them. When the gaming veterans sued, claiming that they had rights to the items they acquired in the game, a federal court in California ruled against them on the grounds that the license agreement took precedence. Other recent U.S. court rulings in virtual disputes have come to similar conclusions.
But judges elsewhere have taken a different view. A Chinese player in the Korean-made online game Mir 3 claimed that his personal rights had been violated when the game's local Chinese operators deleted the magic sword he used to battle virtual villains. The operators claimed it had been illegally duplicated from an original. The player filed suit, contending that he had bought the magic sword in good faith and that it was worth about $120. A Chinese court in Xuhui district ruled against the game's operators, essentially finding that the player's property rights were paramount.
In Second Life, Linden Lab executives wanted to avoid this confusion, believing that users needed clear ownership for economic activity to thrive, recounted Cory Ondrejka, chief technical officer. Otherwise, users would have little incentive to invest.
But he stressed that this ownership did not extend to full property rights -- creators have intellectual property rights to the software patterns used in making virtual objects but no rights to the objects themselves. Under this formulation, Brown owns her designs but not the individual dresses and pieces of underwear. Nor do her customers "own" the apparel they purchase and hang in their virtual closets.
"Everything in the virtual world is intellectual property, as much as it looks like property or as much as property is a useful metaphor,'' Ondrejka said. "Copying it is not theft. It's infringement, but it's not theft.''
But Joshua Fairfield, a professor at Indiana University School of Law, said there's more to online rights than just intellectual property. He said there are legal reasons to believe that property rights to objects can exist in a virtual realm, but no U.S. court has affirmed the concept.
Earlier this month, U.S. Circuit Judge Richard A. Posner visited Second Life, appearing as a balding, bespectacled cartoon rendering of himself, and addressed a crowd of other animated characters on a range of legal issues, including property rights in virtual reality. Posner stressed that it was in Linden Lab's interest to ensure due process and other rights.
"They want people to invest in Second Life, and we know people won't invest if their rights are not reasonably secure," he told the audience, which included a giant chipmunk and several supermodels. He went on to predict the eventual emergence of an "international law of virtual worlds" similar to international maritime law.
Meanwhile, as mapping technologies rapidly improve, companies are increasingly able to transfer the real world to the online world. But are property rights any clearer in such a "real" virtual world?
Microsoft, for instance, launched an online service last month called Virtual Earth that features highly detailed three-dimensional photographic maps of American cities. Microsoft plans to make money by selling advertising billboards in this virtual depiction of urban America.
But the company's lawyers and advertising executives are still grappling with the question of whether those who own the property depicted in Microsoft's 3-D images have any control over how their depicted property is used online. For instance, does Federal Express have the right to object if an ad for its competitor DHL is posted in the parking lot at virtual FedEx Field?
"We haven't fully delineated all the guidelines for do's and don'ts,'' said Bobby Figueroa, a director of product management at Microsoft.
December 22, 2006
New World Notes
Gwyneth Llewelyn and the Kuurian Expedition
"...The analogy of “TV vs. theater” was thrown in for discussion — WoW and Everquest are more like TV, where the content is fed to the consumers. Second Life is more like a theatre, especially the ones presenting “interactive performances” where actors meet the public and there is a thin line separating both. Poinky comments: “It's a lot easier to watch TV than to go to a theatre.” This would explain the incredible commercial successes of WoW or EQ.
Interestingly enough, they tend to agree that you need some creativity to be able to enjoy SL.
Grace: “If you are not open to thinking creatively, SL might not be for you.” Frank and Poinky: “What I believe [is] we should define what is creative and what is naive creativity... Here everyone makes different fashions or architectures, by just right-clicking on the target, [hitting] a few buttons, and then [moving] on, but it does not mean they are all interesting because they are original.” ..." From Gwyneth Llewelyn and the Kuurian Expedition by W. James Au.
Makes Upgrade! History
Thanks to John (Craig) Freeman and Eric Gordon, the Thursday, December 14, 2006 gathering of Upgrade! Boston was broadcast live to Second Life. This marks the first broadcast of an Upgrade! event in Second Life. More images can be found here.
Audience members in both spaces conversed with Brooke A. Knight, Mobius Artists Group (Larry Johnson, David Miller, Margaret Bellafiora, Tom Plsek and Lewis Gesner), and John Snavely about each of their works currently being presented by Turbulence.org at Art Interactive. Their respective projects, Cell Tagging, Variations VII: FishNet and WhoWhatWhenAir, were commissioned as the result of Turbulence's New England Initiative II.
December 12, 2006
MobZombies explores a new dimension of handheld gaming by adding motion awareness to classic arcade-style gameplay. Using a handheld device and a custom motion sensor, players enter a virtual world infested with pixel-art zombies (an homage to vintage 8-bit console games). The goal of the game is to stay alive, running away from the ever-encroaching zombies or planting bombs to destroy them.
The twist is that a player's physical position controls the position of their zombie-world avatar, forcing the player to actually move around the real world to succeed in the game.
The virtual zombie-world is a simple environment -- the game's complexity comes from players having to negotiate real-world objects in order to avoid the zombies and stay alive. The scoring system is simple: the longer you can stay alive, the higher your score. Of course, the longer you stick around, the more zombies you'll encounter.
"Basically, the game uses a digital compass and an accelerometer fastened to your hip (transmitting data through Bluetooth) to get a relative position. That position controls a little zombie character in a virtual game world, and your main objective is to basically run away from zombies - by really running (or walking...). You can also drop bombs, but the trick is that you have to get away from the drop zone quickly before the bomb explodes on you. Other than zombies, the virtual world is really lacking any other obstacles - again, the idea is that the barriers are all brought in from the physical world. For example, in the game world, you might see a clear path away from the zombies, but in the physical world, that path might contain a busy street, or a wall. So that's where a lot of the difficulty comes in - learning how to navigate both these worlds and be hyper-aware of your position in each.
The game has some history - I started designing it as a research project at USC, and it was entirely GPS based. However, we really found that it was a totally different experience than what we wanted, because of GPS resolution issues, as well as the ~1 second refresh rate. So my friend Aaron and I built out an entirely different version based on relative movement. Now that we've got this version up and running, we're thinking about ways to get GPS involved again, while still retaining the analog control feel. One of the ideas we've been toying with is having community-assisted item pickups. In the current game, we randomly spawn health and bomb pickups near your character. We were thinking it would be cool if people on the web could drop item pickups for you by clicking on a map. We could place those items in the virtual world based on a GPS position, but once it was in the world, the player could still use the same control mechanism to get to the item. So yeah, basically thinking of creative ways to use GPS and maintain the nice analog feel of the human-as-joystick thing." -- Will Carter [via]
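The relative-positioning scheme Carter describes — compass heading plus accelerometer, in place of GPS — amounts to simple pedestrian dead reckoning, which can be sketched as below. The step-detection threshold and stride length are illustrative assumptions, not values from MobZombies.

```python
# Dead-reckoning sketch: an accelerometer spike counts as a step, and the
# compass heading at that moment decides which way the avatar moves.
import math

STEP_LENGTH_M = 0.7        # assumed average stride
ACCEL_THRESHOLD = 11.5     # m/s^2 spike that counts as a step (assumed)

def dead_reckon(samples, start=(0.0, 0.0)):
    """samples: list of (accel_magnitude, heading_degrees) readings.
    Returns the estimated (x, y) position, x east and y north."""
    x, y = start
    for accel, heading in samples:
        if accel > ACCEL_THRESHOLD:          # a step was detected
            rad = math.radians(heading)
            x += STEP_LENGTH_M * math.sin(rad)   # east component
            y += STEP_LENGTH_M * math.cos(rad)   # north component
    return x, y

# Four steps heading due north, three quiet samples, two steps due east.
readings = [(12.0, 0)] * 4 + [(9.0, 0)] * 3 + [(12.0, 90)] * 2
print(dead_reckon(readings))  # roughly (1.4, 2.8)
```

Unlike GPS, this updates on every step rather than once a second, which is why the team found it preserved the "human-as-joystick" analog feel.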
November 10, 2006
LAYING DOWN ROOTS
[image: Frobozz and Glitter] Netroots move into the metaverse. Inspired by BarCamp, the ad-hoc open conference system, RootsCamp merges that format with left-of-center, Net-driven political activism. Perfectly timed to launch in the wake of the mid-term elections, the group will hold events in Second Life every day for the next week, starting tomorrow at 1pm SLT. Unsurprisingly, they'll start with a debrief on the elections which just brought the Democrats back in control of Congress.
It's another example of metaverse-based political activism, following a path laid by Governor Mark Warner, and others before him. "RootsCampSL is an effort to allow people that can't make it to a physical offline RootsCamp," co-host Drew Frobozz explains, "to still be able to participate in one, and also to get the progressive Netroots up to speed with Second Life-- which we believe will be an important political organizing platform for the 2008 election." I've previously speculated on Second Life's potential to become a virtual meet-up platform, but given the real life activist background of Drew Frobozz (here) and his co-organizer Ruby Glitter (here), I'm inclined to take that statement far more seriously, coming from them.
October 12, 2006
Immersive Mixed Reality Environments
Saturday 21 October 2006, Erasmus Medical Center, Rotterdam :: 11.00 – 12.30: Bioinformatics dept., Faculty building, 15th floor; 12.30 – 18.00: Sophia Children’s Hospital, Cinema 3rd floor.
Test_Lab: Immersive Mixed Reality Environments is the product of a unique collaboration between the Erasmus Medical Centre and V2_, Institute for the Unstable Media, with the aim of opening a dialogue between scientists and artists who apply Virtual Reality in their research and art practice. The event consists of demonstrations by Virtual Reality artists and scientists providing hands-on experiences with Immersive Mixed Reality Environments, and presentations by renowned international speakers presenting the latest in Virtual Reality in science and art. See below for the program details, a description of the projects that will be demonstrated, and the invited speakers who will present their work in the seminar.
Test_Lab is a bi-monthly public event hosted by V2_ that provides an informal setting to demonstrate, test, present, and/or discuss artistic research and development (aRt&D).
The event is free of charge, but registration is required before the 19th of October. For further information and registration please contact Remco Beeskow at firstname.lastname@example.org (tel: +31 (0)10 206 72 72) or Fred Balvert at f.balvert[at]erasmusmc.nl (tel: +31(0)6 41431721). Also visit www.v2.nl and www.erasmusmc.nl.
11.00h - 12.30h: Demonstrations:
- Sonia Cillari (GATC/Life)
- Anton Koning (Clinical and research demonstrations)
- Artem Baguinski (Deep Limb Sensation by Pierre Proske)
12.30h - 12.50h: Reception
12.50h – 13.00h: Opening by Anne Nigten (Manager, V2_Lab)
13.00h - 13.45h: Peter van der Spek (Erasmus MC)
13.45h – 14.30h: Derek Hales (University of Huddersfield)
14.30h - 15.00h: Coffee break
15.00h – 15.45h: Yike Guo (Imperial College London / CEO, InforSense)
15.45h – 16.30h: Maurice Benayoun (Création Interactive)
16.30h – 17.15h: Discussion
17.15h – 18.00h: Drinks
(Saturday, 21-10-2006, 11.00h-12.30h,
Erasmus MC Bio-informatics dept., Faculty building, 15th floor)
Artist Sonia Cillari will demonstrate a project called GATC/Life, which she realised as an artist-in-residence at V2_ and in collaboration with Erasmus Medical Centre’s Bio-informatics department.
GATC/Life is an immersive experience inside unities of life. A liquid, multi-colored, breathing membrane (the 'inside') contains flocks of cells and the smallest particles and molecules, which exhibit disordered, human-like, noisy behaviours. Our presence and actions can influence their configurations and growth within the space around and near us. This work explores our perception in a condition of extended body dimensions and lack of control.
- *Biomedical research*
Anton Koning will demonstrate applications for clinical and fundamental biomedical research.
- *Deep Limb Sensation*
Developer Artem Baguinski will demonstrate an art project by artist Pierre Proske called /Deep Limb Sensation/. The project is the result of an artist-in-residence period at V2_ funded by the Pauwhof Fonds and developed in collaboration with Erasmus Medical Centre’s Bio-informatics department.
In augmented- and virtual-reality environments one is often represented by avatars or virtual bodies/limbs. Deep Limb Sensation exploits this disconnection between our physical bodies and our bodies as we conceive them by having a person use their real arm to activate a "phantom" limb in a virtual world. The context in which this game-play takes place is a deep sea immersive underwater environment, in which a limb is controlled with the aim of touching another creature to accomplish a gentle act of virtual invertebrate love-making.
(Sophia Children’s Hospital, Cinema 3rd floor)
*Peter van der Spek* is professor at Erasmus MC University Medical Center Rotterdam and head of the Department of Bioinformatics. In a world awash with data, there is an increasing need for effective methods of gaining insight into associated and underlying information: we need methods for visualizing information that support rapid learning and accurate decision making. The talk will emphasize various aspects of information visualization, using real-world examples beyond the life sciences, as well as medical applications of computer-generated interactive information visualization methods. Information visualization deals with the representation and presentation of concepts and data in a meaningful way. Depending on the medium used, information can be visualized either in traditional static form or, more recently and more powerfully, in interactive and dynamic form using virtual reality technology.
*Derek Hales* is Research Leader for the Department of Creative Technologies at the University of Huddersfield School of Art & Design, where he also leads the Multimedia subject area. He is a chartered Architect and a regional councillor of the Royal Institute of British Architects, chairing their Digital Futures Group. In the /Immersive Mixed Reality Environments/ seminar, Derek Hales will present his current research on exploring virtual environments and the impact of such environments on contemporary practice.
*Yike Guo* is professor at Imperial College, University of London, and head of its Data Mining Group. He is a world-leading expert in large-scale data mining and Grid computing. He will introduce DiscoveryNet, an e-science pilot project that aims to provide a uniform framework enabling global collaboration among scientists in their data-driven research. The project achieved its goal and has been successfully commercialized by InforSense Limited, an Imperial College spin-out company. One key impact of the DiscoveryNet technology is that it establishes a new model for scientific collaboration as well as knowledge dissemination.
*Maurice Benayoun* is a media artist whose work explores the potential of various media, from video to virtual reality, Web and wireless art, large-scale public-space installations and interactive exhibitions. Benayoun's work has been widely exhibited all over the world and has received numerous international awards and prizes. For the /Immersive Mixed Reality Environments/ seminar he will focus on the concept of ‘situation’ applied to art. Based on his 15 years’ experience in creating VR-based works, Maurice Benayoun will address topics such as: experiencing representation through VR in metaphorical environments (does this introduce the metaphysics of games or the mechanics of human dialogue?), semantic spaces as information to live in, the bible of the VR world as something to read by visiting, virtual architecture as being-prints (the footprints of our experience in the symbolic world), and much more.
*Anne Nigten* is the manager of V2_Lab, the aRt&D department of V2_, Institute for the Unstable Media in Rotterdam, the Netherlands. She lectures on research and development in interdisciplinary fields from an art perspective and advises several media art and science initiatives in Europe. Anne will moderate the /Immersive Mixed Reality Environments/ seminar, using her experience in interdisciplinary debates to provoke a dialogue between the presenting scientists and artists and the audience.
For more information please contact Remco Beeskow, V2_Communications, press[at]v2.nl. Phone: +31(0)10-206 72 72. Website: www.v2.nl.
September 15, 2006
Virtual Spaces are the New Public Spaces
In less than a month Gaming Realities, one of the biggest festivals to date encompassing the critical exploration of videogames in the fine arts and humanities, kicks off in Athens. The festival hosts a great line-up of artists and speakers, many of whom we've covered or archived here at Selectparks over the years (Blast Theory, Sir Frasca, Darwinia, Eastwood Group, Molleindustria and many more). I'll be there, so if you're at the festival come to my keynote and ask some impossibly difficult questions when I least expect it. Cheers! Read on for the comprehensive press release and exhibition details. [posted by julian on selectparks]
EXHIBITION: NetODrom by Christoph Anthes, Alexander Wilhelm, Helmut Bressler, Roland Landershamer, Johannes Zarl, Silke Wiesinger; Austria, 2005 :: Can you see me now? by Blast Theory; U.K., 2001 :: Philosopher Death Match by Benjamin Chang; U.S.A., 2006 :: Grid Chase - The 5€ Dance Pad Project by Dimi Christopoulos; Greece, 2002-2006 :: Himalaya’s Head by DEVART; Netherlands, 2005 :: Civilization IV – Age of Empire by Eastwood Group; Serbia, 2004 :: N o w h e r e - ein welt raum spiel by Sylvia Eckermann, Gerald Nestler, Christof Cargnelli, Oliver Irschitz; Austria, 2005 :: Bordergames by Fiambrera; Spain, 2005 :: Postvinyl by Mathias Fuchs; U.K., 2005 :: Lykno by David Gauthier, Henri Marino, Laurie Prevot, Jean Batiste Spieser; France, 2006 :: Semiomorph by Troy Innocent; Australia, 2001 :: Darwinia by Introversion Software; U.K., 2005 :: Ready Played by The Ludic Society (Margarete Jahrmann/Max Moswitzer); Austria/Switzerland, 2006 :: Kalamiotou_02 by The mamayans; Greece, 2006 ::
Max Payne cheats only by Jodi; Netherlands :: Ariadne’s Sonic Threadball by Panagiotis Koutlemanis, Dimitris Dinieas; Greece, 2006 :: Façade by Michael Mateas, Andrew Stern; U.S.A., 2005 :: The McDonald’s Videogame by Molleindustria; Italy, 2006 :: Soviet Unterzogersdorf the adventure game by Monochrom; Austria, 2005 :: Book and Volume by Nick Montfort; U.S.A., 2005 :: The making of Balkan wars: the game by Personal Cinema; Greece, 2004 :: Disaffected by Persuasive Games; U.S.A., 2006 :: Interactive Circus by Prof. Marie-Helene Tramus, Cedric Plessier; France, 2005 :: Jumping Rope by Orna Portugaly, Daphna Talithman, Sharon Younger; Israel, 2004 :: Global Conflicts: Palestine by Serious Games Interactive; Denmark, 2005-2006 :: Boys in the Hood by Axel Stockburger; Austria/U.K., 2005 :: The endless forest by Tale of Tales; Belgium, 2005 :: Game Music by Vladimir Todorovic; Serbia, 2004-2005 :: Fijuu2 by Julian Oliver, Steven Pickles; Germany - New Zealand, 2006 :: Coin Snatch by sheismartha, Alexandros Plakidas Dasios; Greece, 2006 :: History Lost Redux by University of the Aegean; Greece, 2006 ::
The democratisation of virtual environments
International New Media Collective
A spectre is haunting Virtual Environments: their democratisation.
It is the spectre of democracy which, after abandoning its physical space [real life] and sinking ever deeper into the contradictions of the lies and cancellations that led to its constitution, has broken its relations with the real and become a spectre itself. This is how the paradox of the spectre occurred: the wandering shadow of democracy, a spectre that tries to penetrate, and aims to democratise, the new kind of ghosts constituting the space that now forms a major element of human experience: the Virtual Environments.
The democratisation of virtual environments, computer and video games is a central debate in the interactive media industry. In new media, and especially in video games, democratisation relates mainly to the fact that the user, through interaction and personalisation, takes a major part in the creation process of the games themselves. Mods, made by thousands of volunteer designers for Half-Life, EverQuest, Oblivion, and Quake, are examples of how users participate in the creation process. If we can speak of anything genuinely new in new media, it is the user's capability to participate in the creation process. This capability is not a gift from a generous, democratic technology but is immanent in its own structure; it is an immediate reflection of the conflicts that constitute this capability of participation, whose final destination is democracy.
In other words, speaking of the conditions which arose from the utopian meeting of spectres, which magical ritual could be useful for both the old spectre of democracy and the ‘spectral’ world of virtual environments?
The vast majority of users understand democracy in terms of the quantity of interactive “clicks” that represent the “right” to vote in cyberspace. The image of the voting procedure and of democratisation in virtual space is not the supposedly infinite choice of different web pages or the user’s ability to create a personalised narration in a video game; it is the image of the cursor itself over the icon of “choice”, accompanied by the sound of the “click”, which really is an extension of the image of the ballot box and the hand that is “capable of choosing”.
How is it possible to revive democracy in the realm of the potential, when within the context of the objective it has become a commodity, one usually exported and imported via violence?
August 23, 2006
The Value of Translating the Virtual to the Real
Recursive Instruments is generously supported by Linden Lab through their Fellowship for the Visual and Performing Arts ... The value ascribed to objects, scripts, textures, and land in Second Life contrasts sharply with the value given to similar, material wants. Digital work cannot yet economically compete with comparable real world services. Our initial proposal to Linden Lab investigated these different markets of exchange. The ease of copy and paste undermines traditional economic practice. Without an original—a gold standard—the Linden faces an uphill battle to establish value in the minds of others. We believe that a body of images, objects and ideas exterior to Second Life is of paramount concern for a sustainable environment.
Just as [Simon Spartalian's] words and these slides communicate between you and me, so must Second Life use its content to communicate with culture. Tangible objects can carry a vivid experience outside the world that gave them birth. Our work began at this intersection. Using the OpenGL Extractor (OGLE) from OpenLab and Eyebeam Research in NYC we can capture the 3D data behind Second Life. We have a Computerized Numerical Control (CNC) mill for the purpose of carving objects built from this data. Our first construction centered around the landscape of Sheep Island, a Sim which we printed, section by section (acre by acre), into the real world. Read more >> [Related post]
August 21, 2006
Reflecting Back on Screens:
Inclusion & Exclusion
I'm back in Ottawa after a week in Banff with some really great people doing really great stuff, but before I switch my attention to fall teaching I'd like to think out loud about a few things that keep coming to mind.
It seems to me that our conversations on new media art ultimately revealed that it isn't the newness of media that's so interesting, but rather the artness of it. (Yes, I know that's not a word, but bear with me.) While "user-generated content" - or, as I prefer, public authoring and participatory media - repeatedly came up in conversation, it was quickly distinguished from artistic practice. While no one seemed willing to come right out and say it, I think the implied distinction was primarily quality-based, and both aesthetic and cultural quality are notoriously subjective.
My keynote address (which I'll post as soon as possible) chose to turn "screen" from noun to verb in an attempt to draw out the ways in which new media art and design practices involve acts of inclusion and exclusion. I tried to unpack a few of the primary metaphors that feed our notions of mobility, and I invited people to reimagine their senses of community and citizenship based on what it means to be in or out. The point of all this, of course, is to get producers of all sorts to acknowledge their own screening processes. In my mind, the most pressing political and ethical challenge facing us today is how we account for, and become accountable to, differences in perspective and practice. In other words, who gets to decide what constitutes quality content? The government? The broadcaster? The company? The artist? The designer? The academic? The public? And which public is that exactly? When it comes to collaboration, whose interests take precedence in which contexts? (As one artist said to me after my presentation, "I've realised I value art more than people.")
In my panel presentation (which I'll also post shortly) I discussed what I consider to be Proboscis' exemplary collaborative work, and how it was this sense of collaboration that helped shift a broadly technology-focussed project to a culturally-focussed one, or more specifically, how the two became entirely inseparable. Fiddian Warman also showed us a couple of Soda Creative's projects that specifically engage some of these questions, albeit in indirect ways. Both Nahnou-Together and b.tween2cultures explore what it might mean to create distinct cultural identities - together. Or how about this? The Residents and MOMA's new River of Crime Community Art Project seeks out a space for professionals and amateurs to work together. As "an exploration of the rise in popularity of instant-video-creation due to the proliferation of inexpensive video cameras, as well as both still cameras and phones that shoot video," ROCVID invites anyone and everyone to make a video - any way they like - to go with an audio clip provided by the legendary music group. Mass art and art for the masses indeed.
I'm sure I'll continue to think about these things as I prepare for my lecture on mongrel practices of art, design and anthropology at UIUC art + design next month, and as always, comments are welcome.
And for anyone interested, here are all my Interactive Screen 0.6 posts:
IA Screen : Introducing the Canadian new media context
The Convergence Conundrum: A Cross- Canadian Perspective on the Business of Content
Technology, Privilege and Innovation: The Legal Perspective
Creative Commons: Art, Activism and the Database
The View from Outside In: Margins of Art and Activism
The View from Inside Out: Margins of Technology and Business
Playing the Interface
Serious Games: Understanding the grey area between learning and playing
Filming Outside the Cinema
Blast Theory - Day of the Figurines workshop
The Impossibility Box: An Emotional Computation
August 08, 2006
Chit Chat Club
A Teatime Telepresence
Chit Chat Club is an experiment in bringing together the cafe and the online world. Chit Chat Club is now live. In a cafe in downtown San Jose, three inhabitable sculptures sit at tables, waiting to be brought to life. One is an undulating form, another a chair just starting its metamorphosis into human form, and the third is a giant spoon. Be one.
Cafes are social spaces - places where people come to converse, to meet friends, to people-watch. The cafe is a physical space, rich with the smell of coffee and the sound of chatter, a navigable place that people must negotiate to find good seats, to see and be seen. The cafe is local, fixed in space, reflecting and defining the social structure of the neighborhood.
The online world is also social, but far less physical. Its inhabitants have no bodies, its borders are porous. It is global rather than local. Yet there is a different kind of depth that exists in the online world, a depth of persistence, of vast collections of data, of conversations that remain permanently archived, unlike the ephemera of the spoken word.
Conversing online while in a cafe is not of course a novel idea. Many cafes today feature wireless access, and patrons read blogs, write email, instant message, etc. Yet this arrangement moves the patron's attention from the public and physical space of the cafe to the private world of these typed interactions.
Chit Chat Club brings online visitors into the public, physical space of the cafe, both by providing them with a view of the cafe and by giving them a physical presence within it.
July 21, 2006
Reinstates the Virtual World with an Offline Voice
Artist Jaygo Bloom has been commissioned by folly, a media arts organisation based in Lancaster, to develop the sound work BUMP… for this year’s Futuresonic festival, in partnership with Manchester based digital signage organisation Pixel Inspiration. Futuresonic is the UK’s leading electronic and media arts festival, and takes place in Manchester in July 2006. According to Jaygo Bloom, the commissioned artist, "As computer technology becomes more sophisticated, the technology itself will fade into the background of human activities, becoming far less intrusive than is the case today."
Jaygo Bloom is suspicious of the invisible, and therefore seeks to reverse the drive toward this technological transparency. BUMP… reinstates the virtual world with an offline voice. BUMP… is an online project that translates an online keystroke into an actual, real-time physical event. BUMP… will reassert the presence of this technological intervention in its environment via the quick-shot, rapid-fire, 8-bit sounds emanating from its location-based folly sound modules, located throughout participating venues of Futuresonic 2006. BUMP… presents an interesting view of networked interaction and social performance.
Jaygo Bloom concludes, "By attaching a physical connection to things we perceive as virtual, we can begin to make the intangible tangible."
Jaygo Bloom is a UK-based multimedia artist whose practice includes audio-visual installation, game hacking, film making and physical computing. Over the past year he has been developing a wide variety of new online and offline projects, ranging from his globally aware interactive games console ‘Tao Joystick’ to his 8-bit latino percussion interface ‘Marrackattack’. He works independently and also as part of the Glasgow-based audio-visual team 'Pointless Creations'. Popular for his public interventions, blue screens and old sports car, he has also presented successful projects at Glasgow International05, Pixelache05, Rencontres Paris/Berlin, and Pong.Mythos.
July 19, 2006
Visible City + Upgrade! Telaviv-Jerusalem
Visible City -- by Dan Collins, Dianne Hansford, and Ruth Ron -- uses Augmented Reality (AR), Mobile Computing (GPS), and 3D Simulated X-Ray Models developed in animation and CAD applications to visualize and navigate the urban core of San Jose (ISEA2006). Augmented reality (AR) in tandem with GPS-based mobile computing applications allows intuitive information browsing of location referenced information. In AR, the user's perception of the real world is enhanced by computer generated entities such as 3D objects, location markers, superimposed text, and spatialized audio. The interaction with these entities is available in real-time to provide convincing and natural feedback to the user. The coupling of Augmented Reality with location based information (GPS) and high-end 3D modeling enables the integration of virtual entities with real world objects in a seamless manner.
Ruth Ron is an architect and digital artist, whose work focuses on the extension and deformation of real space by using virtual assets. Her work explores the borders between architecture and technology, form and media. She has exhibited in New York, Seattle, Paris and Jerusalem. In her lecture, Ruth will show samples of her spatial installations, online interactive work and some current research of interface between the computer and the physical environment.
Upgrade! Tel-Aviv-Jerusalem: Monday, 24.7.06, 19:30 @Minshar, David Chachmi st. 18, Tel Aviv.
July 18, 2006
SWM05: Distributed Bodies of Musical-Visual Form
SWM05: Distributed Bodies of Musical-Visual Form [SWM05] -- by Troy Innocent and Ollie Olsen with the Shaolin Wooden Men and Harry Lee -- features the distributed bodies of musical-visual form that are inhabited by the Shaolin Wooden Men (SWM), a virtual band, a 'gang of numbers' -- me(a)tacodeflesh. SWM require your assistance to manifest as media creatures. They invite you to send them images of your local environment in which they can appear. Sending images unlocks access to the SWM05 mobile site which consists of downloadable micromusic ringtones and small screen machinima performances. The SWM are everywhere. In a meshwork of wireless entities, they are media creatures seeking a fragmented existence to be consumed in the nanoseconds of play-time in the emerging wireless net. SWM05 will transfigure the SWM by embodying them in a new materiality.
SWM05: Distributed Bodies of Musical-Visual Form is a 2005 commission of New Radio and Performing Arts, Inc., (aka Ether-Ore) for its Turbulence web site. It was made possible with funding from the Andy Warhol Foundation for the Visual Arts.
The SHAOLIN WOODEN MEN are a 'gang of numbers' whose bodies are 'made of sound'. In their various manifestations they have released three full-length recordings - "S.W.M." (1992), "The Hungry Forest" (1994) and "Supermindway" (2001) - and a collection of singles and remixes released on the Psy-Harmonics label. The S.W.M. work across image, sound and interactivity, and have performed at DEAF96 and exhibited at ISEA96. Typically, they require the assistance of creative humans to manifest as media creatures to be distributed across the net.
TROY INNOCENT has been exploring the 'language of computers' and the new aesthetics of digital space since 1989. In recognition of this work, Innocent has been described as "the first philologist of virtual reality". His artificial worlds – Iconica (SIGGRAPH 98, USA), Semiomorph (ISEA02, Japan), and lifeSigns (Ars Electronica 2004, Austria) and Ludea (SIGGRAPH2006, USA) – explore the dynamic between the iconic ideal and the personal specific, the real and the simulated, and the way in which our identity is shaped by language and communication. He is currently Senior Lecturer, Department of Multimedia and Digital Arts, Monash University, Melbourne.
OLLIE OLSEN is an Australian composer, synthesist and sound designer who has been producing and performing rock, electronic and experimental music for the past thirty years. Projects include "Max Q," "NO," "Third Eye," "Orchestra of Skin and Bone," "Shaolin Wooden Men," and "I am the Server." Recent collaborations and projects include performing with Negativland (USA, 2001); guest soloist with the Australian Art Orchestra (2002); and recording with the Japanese bands BOREDOMS and AOA (2001-2002).
HARRY LEE is a web developer working with Macromedia Flash, SQL, PHP and related technologies. Recent projects include database development for lifeSigns, exhibited at the Australian Centre for the Moving Image (ACMI) in 2004, in addition to numerous corporate and education projects. He lectures in multimedia and digital arts in the Faculty of Art & Design at Monash University.
(a 1st Life/2nd Life mashup)
I've started working on a bit of a summer laboratory experiment to see how Google Earth could become a platform for realtime mobile gaming. (Follow the link on the Flickr photo page to the URL you can load in your Google Earth client to see the game board in its current state.) With Google Earth open enough to place objects dynamically via KML, a bit of SketchUp modeling, and the loan of an enormous battleship model that some construction dude uploaded to the SketchUp/Google 3D Warehouse, I started plugging away at a simple game mechanic based on the old Milton Bradley Battleship game.
Battleship, for those of you who never played, has a simple mechanic — two players set up their navy ships on a peg board, hidden from each other. You take turns plugging a peg into your side of the board, with each peg hole designated by a letter/number coordinate on a grid. When you plug a peg in, you say where you put it — E4! If your opponent has a ship in that coordinate (or part of one, actually), they say, sorrowfully, "Hit!" and you register that peg hole with a color to indicate a hit. If not, you just put in a neutral peg to remind you that you already tried that spot. The game continues until one player has sunk all the other guy's ships.
The mechanic I'm experimenting with is simpler. One person places their ships using Google Earth, and the other person goes out into the normal world with a mobile phone and a GPS unit connected to it. The phone has a small Python script on it that reads the GPS and sends the data to the game engine, which then updates the Google Earth KML model showing the current state of the game grid. When the player who's trying to sink the ships wants to try for a hit, they call into the game engine and say "drop". The game reads back the coordinates at which the "peg" was dropped, and shortly thereafter the other player will see the peg appear at that coordinate. If the peg hits one of the ships, it's a hit; otherwise it's a miss.
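The post doesn't show the engine's internals, so the names below are purely illustrative, but the hit test at the heart of this mechanic reduces to a point-in-rectangle check: ships sit in lat/lon bounding boxes, and a peg drop either lands inside one or it doesn't.

```python
# Illustrative sketch of the peg-drop hit test; the real engine's data
# structures aren't described in the post, so these names are invented.

def make_ship(name, lat_min, lon_min, lat_max, lon_max):
    """A ship occupies a lat/lon bounding box on the game grid."""
    return {"name": name, "box": (lat_min, lon_min, lat_max, lon_max)}

def drop_peg(ships, lat, lon):
    """Return the name of the ship hit at (lat, lon), or None for a miss."""
    for ship in ships:
        lat_min, lon_min, lat_max, lon_max = ship["box"]
        if lat_min <= lat <= lat_max and lon_min <= lon <= lon_max:
            return ship["name"]
    return None

fleet = [make_ship("battleship", 34.0210, -118.2890, 34.0220, -118.2860)]
print(drop_peg(fleet, 34.0215, -118.2875))  # prints: battleship
print(drop_peg(fleet, 34.0300, -118.2875))  # prints: None
```

In practice the engine would run this check when the caller says "drop", using the last GPS fix uploaded by the phone as the peg coordinate.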
As I continue developing the engine, I'll probably have the game engine let you know when you call in to do the "drop" whether it was a hit or not, or the opposing player can text or call to indicate the same.
I want to put in a "ping" command for the call-in battleship control center to help whoever's wandering around in the world navigate a bit. (Although the game is only really practical if you limit the boundaries over which it can be played.)
I need a lighter-weight battleship — the current SketchUp model is too large in data-size terms, and takes too long to load initially (although it only needs to be loaded once).
* Experiment with "1st Life" action reflected in "2nd Life" worlds (verso of the folly Ender suffered in Orson Scott Card's simply fascinating Ender's Game)
* Learn KML
* Learn SketchUp
* Learn Python for S60
* Make a mobile/pervasive game in which one has to move around in order to play
* Google Earth client
* Apache+Tomcat+MySQL (Java and JSP on the server-side computer)
* Nokia N70 and a little Python app to connect to the Bluetooth GPS and upload the data to the server
* Voice Application (for the battleship control center to drop/ping)
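Since the engine's job is to regenerate the Google Earth KML model after every peg drop, it can be sketched as a small KML generator. The placemark names, coordinates, and document layout below are placeholder assumptions, not the author's actual code; the one real constraint is that KML orders coordinates longitude-first.

```python
# Minimal KML generator for the peg layer of the game board; a sketch
# under assumed data shapes, not the post author's implementation.

def peg_placemark(name, lat, lon, hit):
    """One KML Placemark per dropped peg (KML coordinates are lon,lat,alt)."""
    label = "Hit!" if hit else "Miss"
    return (
        "<Placemark>"
        f"<name>{name}: {label}</name>"
        f"<Point><coordinates>{lon},{lat},0</coordinates></Point>"
        "</Placemark>"
    )

def game_board_kml(pegs):
    """Wrap all pegs in a KML document Google Earth can load (or re-fetch)."""
    body = "".join(peg_placemark(n, la, lo, h) for n, la, lo, h in pegs)
    return (
        '<?xml version="1.0" encoding="UTF-8"?>'
        '<kml xmlns="http://www.opengis.net/kml/2.2">'
        f"<Document>{body}</Document></kml>"
    )

kml = game_board_kml([("E4", 34.0215, -118.2875, True)])
print("E4: Hit!" in kml)  # prints: True
```

Serving this file from the engine and pointing a KML NetworkLink at it (with a refresh interval) would give the ship-placing player the slowly updating board described above.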
Time Committed: * About 2 days learning stuff, and 1/2 a day programming the computer to make it do things.
Why do I blog this? To keep track of and share the near future laboratory experiments I'm doing this summer.
Technorati Tags: mobile, pervasive electronic games, pervasive media, play, urban play
[blogged by Julian Bleecker on research techkwondo]
July 15, 2006
The Social Potential of Urban Screens
"...The emergence of the internet culture has brought new ways of participation and exchange to challenge hierarchical authorship. The 'new forms of creation mediated by networks more and more remote, fast and wireless' (Beiguelman, 2006) derived from this culture, influence new productions of public space. Artists are exploring the potential of the growing interconnections between online and offline worlds, and between social experiences in virtual and physical space. Wallace (2003) sees the internet connected to screens 'as a delivery mechanism to inhabit and or change actual urban spaces'. We can find various community experiments in the growing field of social computing: friend-of-a-friend communities; participatory experiments in content creation in the mailing list culture; and more recently, the wiki websites (where users can add and edit content) and blogging systems that serve an increased need for self-expression. By connecting large outdoor screens with digital experiments in online worlds, the culture of collaborative content production and networking can be brought to a wider audience for inspiration and engagement..." From The Social Potential of Urban Screens by Mirjam Struppek, Visual Communication, Volume 5, No. 2, Sage Publications June 2006, p 173-188.
July 07, 2006
ZeroOne San Jose
SimVeillance by Katherine Isbister and Rainey Straus : San Jose re-presents urban passersby within a game environment that mirrors a ‘real-world’ public space. The artists will recreate the Cesar Chavez plaza in downtown San Jose using the Sims 2, and will work from images captured by surveillance cameras trained on the square, to populate the simulated square with replicas of ‘real’ transients.
The final installation will have two displays. On one, the game runs, populated with the borrowed transients; on the other, a slide show pairs images: a surveillance photo and a digital snapshot of the ‘Sim’ that was created in the likeness of the real person.
July 06, 2006
Hyper-Instruments and Immersive Spaces
VIRTUAL MUSIC COMPOSITION: Unsatisfied with just creating virtual plant life, a provocative multiplayer game, and ambitious machinima, Robbie Dingo has also been hard at work creating virtual world musical instruments that actually play in-world in real time. He extensively documents the creation of his "Hyper-Instruments" on his blog here and here, and recently sent me word of a song-writing contest for music composed to be performed on his new SL cello.
ADVENTURES IN IMMERSION: Seasoned RL/SL architect Jauani Wu takes us on a personal tour of successfully immersive spaces in Second Life, accompanied by a 3D designer's manifesto written e.e. cummings style: immersion requires depth. it's not sufficient for one enclosed space to be finely constructed. it requires that the next space be so as well. and the space after that. immersion works better when the surface of one space frames the next. it reinforces the notion of world. By my lights, mandatory reading for builders looking to truly transform the world according to their vision. Read it all here. [posted on New World Notes]
New Augmented Space and its Memory
The concepts of place, proximity and distance are going to be substantially rewritten by the evolution of locative media. The origins of this process are not only related to abstract digitalization or to the latest hi-tech gadgets; they are more significantly rooted in social changes induced by the overflowing of net space. Social access to, and the subsequent construction of, net space started in the mid-nineties, so the perceived 'space', as a public, collective and shared dimension, was slowly and literally doubled in the digital dimension. The next step is the ongoing process of joining these two halves (real and virtual space) back together in an augmented reality that is a real experience. If the man on the moon broke the perception of 'external' boundaries of his time, the net integrated into urban space has definitively broken our inner spatial boundaries, outdating established physical perception as the only possible coordinates for sensing space.
The net is based, on one hand, on a subject (its IP) directly addressable from every (good or evil) point of view; on the other hand, that same subject is able to add his own data and memory to his own space, building a digital environment piece by piece. These profound changes are investigated and enhanced in the impressive work of the Sonarama 2006 curatorial team (José Luis de Vicente, Óscar Abril Ascaso and Drew Hemment, with Advanced Music). After Micronations (2004) and Randonnée (2005), the last part of this conceptual mapping of contemporary territory has been accomplished through a selection that emphasizes different "permutations of the mobile syntax", as de Vicente defines it. On the ground floor of the CCCB, the exhibited works were placed in dark corridors and rooms that, like a maze, formed an ideal location.
The most awaited project was undoubtedly Blast Theory's premiere of 'The Day of the Figurines', a social multiplayer game played via SMS and reflected in a fictitious world of real figurines. With more than three hundred players in three days (it can actually host up to one thousand), it shaped a small world that was mirrored in a physical update: the authors moved figurines and forms by hand every twenty minutes, keeping the representation in step with the latest online developments. Here again the usual 'virtualization' of real structures was inverted. The process was the opposite of a webcam: instead of taking a glimpse of a real space, the mapping of the online process was a slow update of a plastic scenario, much more attractive and meaningful to our instincts than real-time colorful graphics on a screen. On the locative media horizon, the shades of distance are many.
Zexe.net, a project by Golden Nica winner Antoni Abad, built on the same concept of giving camera phones to particular groups of people and selecting their pictures and visions, here focused on taxi drivers, gypsies and prostitutes. How far away is their own landscape, represented through our shining technology? And, changing perspective, how far away is our own home, if we can easily see it from a satellite view? Jeremy Wood's Meridians, a psychogeographic text composed as a GPS drawing over a very famous London area, reminds us of the shock of zooming in on a photograph (no longer an abstract color representation) of our own daily territory. This reshapes our vision of the neighborhood, temporarily establishing a center where we live (or where we happen to be) and treating the world as its surroundings. It's a sort of ego-geography, able to stunningly visualize as real what we had previously seen only as a zoomed-out drawing.
The other shown (or performed) projects included Michelle Teran's classic Life: A User's Manual, Counts Media's social platform Yellow Arrow, the latest Transmediale software art award winner, Socialfiction's .walk (article in Neural 23), another psychogeographical approach in Alejandro Duque's TTSM (Typewriter Tracklog Sewing Machine), a new iPod version of the imaginative G-Player, the relationship between space and sensations in Christian Nold's Bio Mapping, the accidental performances of treasure-hunt Geocaching practices, and the RFID hacking of Preemptive Media's Zapped!, with its cockroach carrying an RFID tag on its back and confusing a supermarket reader with its unpredictable movements. The urban space is a mirror of collective memory, and the memories of the last thirty years have been mediatised more than those of any previous period. The ability to easily juxtapose personal memories, histories and territories could enable a new level of consciousness of reality, stratified on our mediated identity. View the Sonar 2006 photo gallery. [posted on Neural]
July 05, 2006
The MagicBook + The Book Radio
What's New in Books
The MagicBook explores the seamless transition between reality and virtual reality. When users look at the pages of a real book through a handheld display, they see virtual content superimposed over the real pages: augmented reality. When they see an augmented-reality scene they like, users can fly into the scene and experience it as an immersive virtual environment. Currently the user can transition smoothly between these two fixed viewing modes: the augmented-reality view and the virtual-reality view.
The MagicBook also features functionality that supports collaboration: when several users look at the same book page, each sees the augmented-reality image from their own viewpoint. When one of these users decides to switch to the virtual-reality mode and “fly” into the virtual model, the other users are able to see him or her as a virtual character in the scene.
The Book Radio is a design project by Vinay Venkatraman & Pei Yu from the Interaction Design Institute Ivrea. The Book Radio appears to be a regular-looking sketchbook, but sensors and a microcontroller embedded in the spine allow its users to scan radio channels just by flipping the pages of the book. Bookmarking radio stations and annotating favourite programs by writing on the relevant pages are also supported, and the volume can be changed by sliding the associated bookmark.
The Book Radio is wireless and has no screens, dials or buttons. The project is thus concerned with exploring the possibilities of creating intuitive and subtle interfaces.
Handheld Augmented Reality
PDA + Studierstube 4.0
Handheld Augmented Reality: A standard, off-the-shelf Personal Digital Assistant (PDA) constitutes a cost-effective and lightweight hardware platform for Augmented Reality (AR). A PDA provides a simple, well-known user interface, and is fully equipped with a touch-screen and camera for providing a video see-through Magic Lens metaphor of interaction. In our Handheld AR framework, all interactive processing is done exclusively on the PDA without relying on a server infrastructure, which makes this solution highly scalable. Because of the low cost and suitable ergonomic properties of the PDA platform, massive multi-user AR applications become possible for the first time.
It is the goal of this project to demonstrate the first multi-user AR applications with dozens of simultaneous users.
Our software framework, Studierstube 4.0, represents the first attempt at creating a complete solution for AR on PDAs. It operates cross-platform (Windows, Linux, Windows CE) and addresses graphics, video, tracking, multimedia playback, persistent storage, multi-user synchronisation and application authoring tools.
June 28, 2006
TRG [Transreality Generators]
The 'Irreal' in New Media Art
TRG [Transreality Generators]--by FoAM--is a project that builds upon two previous immersive installations / responsive environments: TGarden and txOom. All three projects are concerned with the 'irreal' in new media art. By 'irreal' we mean artworks that provoke a tension or imbalance between tangible reality and imaginary worlds. With responsive environments, the line between the real and the imaginary can be very thin, allowing the experience to become 'irreal': the participants fade in and out of physical reality, uncertain of their position in this 'reality continuum'. TGarden was designed to allow human gestures to use video and audio as calligraphic media, to 'write and draw' the immersive environment on the fly. txOom extended the concept and became an 'irreal ecology' where media would grow based on their interaction with the participants. TRG changes scale once more, to infinitely large and infinitely small 'irreal universes' whose existence is highly unstable and unpredictable, where minuscule local interactions can conjure massive worlds into life.
The conceptual framework within which the artists develop their concepts and designs is the 'simulation of physics and the physicality of simulation'. In order to 'mix' physical and virtual reality in interesting ways, the artists explore the basic principles that constitute our universe (gravity, electromagnetism, nuclear forces) and create media worlds as new universes, in which physical laws are amplified, distorted, enlarged and so on. To an audience immersed in the MR environment, these worlds should 'feel' familiar and intuitive, but upon closer inspection their strange and magical properties become apparent. The TRG team creates rich media worlds consisting of materials, objects, interfaces and architectures that allow a seamless transition between physical reality and the computational worlds. By correlating and abstracting some of the more interesting properties of physical systems (fundamental physical forces, string theory and membrane physics, tensegrity principles, etc.), they can create new (coherent) laws within the media worlds.
The project focuses on Mixed Reality (environments containing significant virtual and physical interaction possibilities, strongly intertwined) and exploring its implications in the cultural sphere. Mixed Reality (MR) enables the public to be immersed in multisensory, responsive media environments capable of reacting and evolving, influenced by human activity and interaction. The participants leave the role of the observers and become creators of a temporary autonomous reality. In TRG, an international team of artists and technologists explore the frontiers of this unique form of creative expression through the integration of self-contained systems of media, materials and structures, perceived as a distinct field of reality, embedded in the physical objects and architecture. This project examines the potential to extend their artistic practice into the field of situational experience design, in which the art-works become all-encompassing art-worlds. [Related]
June 22, 2006
Behold the Brain
BrainMirror, a Swedish-Hungarian co-production by Adam Somlai-Fischer (HU), Bengt Sjölén (SE) and Danil Lundbäck (SE), is an interactive experience in which an image of the visitor's brain appears mixed with his or her mirror image, using natural head movement as an interface to explore volumetric visuals of the human brain. The interface allows exploration of complex MRI data; volumetric renderings are explored by moving closer to the image.
June 14, 2006
My Name is Madison
Taking Madison, WI as its subject, My Name is Madison is an Augmented Reality Game that allows users to explore and interact with the urban landscape from a multitude of perspectives. This project approaches the city as a layered environment, in which players come to understand the development of place through the eyes of history, culture and fantasy.
Using GPS-enabled handheld computers, participants take on the roles of both recipient and creator: performance in context. While walking the streets, they are provided with information that enhances their understanding of the environment, and are then given the tools to create their own interpretations of place. Documentation of these events will be posted to mynameismadison.
The project opens as a part of the Games, Learning and Society Conference taking place in Madison, WI, June 15-16.
June 13, 2006
In Second Life, Too
The video blog conference under the name of vloggercon was held in San Francisco on June 10 & 11. Because the event was sold out, it also took place in Second Life. The location was the Hipcast Conference Center at the Shalida sim. Video.
VLOGGERCON 2006 is the intersection between media-makers and technology. A space for dialog and interaction. Of creation and collaboration. A media village born on the internet, and making camp for one weekend in San Francisco. In the past year, videoblogging has grown into a myriad of forms with a worldwide voice. And your voice is needed at Vloggercon; after all, the media is being made by you, so it's your voice that we want to hear! Vloggercon is a two-day collection of roundtable discussions about the emergence and the future of this new medium; not a series of lectures. (via Tao Takashi and unmediated)
June 12, 2006
de Pong Game™
De Pong Game is a recycling of the famous game PONG [Atari 1975-1977]. This new version has been built with Flash [Action Script] by Arjan Westerdiep for Recyclism™. De Pong Game explores the concept of Augmented Reality by using and interacting with urban architectures [buildings] as background for the game; the game is projected on a building and the limits of that building become the limits of the game area.
The projected ball bounces along the limits of the walls. The software also uses the windows as obstacles for the game. So the ball is limited to the frame of the building. As you touch the ball with the slider, its speed increases, and because the ball bounces on all the obstacles of the architecture, it becomes more and more difficult to play. [Related: Space Invaders]
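The bouncing logic described above is simple enough to sketch. This is an illustrative reconstruction in Python, not Recyclism's actual ActionScript: the building's outline acts as the playfield boundary, window rectangles act as extra obstacles, and every paddle touch speeds the ball up.

```python
def step_ball(pos, vel, bounds, windows):
    """Advance the ball one tick, reflecting off the building's outer
    frame (bounds = xmin, ymin, xmax, ymax) and off any window
    rectangle it would enter."""
    x, y = pos[0] + vel[0], pos[1] + vel[1]
    vx, vy = vel
    xmin, ymin, xmax, ymax = bounds
    if x < xmin or x > xmax:
        vx = -vx
    if y < ymin or y > ymax:
        vy = -vy
    # Treat each window (x1, y1, x2, y2) as a solid obstacle.
    for wx1, wy1, wx2, wy2 in windows:
        if wx1 <= x <= wx2 and wy1 <= y <= wy2:
            vx, vy = -vx, -vy  # crude reflection, good enough for a sketch
            break
    return (pos[0] + vx, pos[1] + vy), (vx, vy)

def paddle_hit(vel, factor=1.1):
    """Each paddle touch speeds the ball up, making play harder."""
    return (vel[0] * factor, vel[1] * factor)
```

Because the windows multiply the reflection surfaces, the accelerating ball quickly becomes hard to track, which is exactly the difficulty curve the installation describes.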
June 09, 2006
Listening and Dancing in Second Life
Taking Your Avatar to a Concert or Dance Class
"...A couple of Sundays ago, I took my avatar to a small pond where about a dozen people lingered in the late afternoon sun, floating on inflatable rafts and inner tubes while a pair of animated swans jerked their way across the water. Everyone was turned toward Few, sitting on the deck above the pond with his acoustic guitar, and everyone nearby could hear a feed of the real-life Hokin, from his real-life basement, playing cover songs.
While I dug Few's proficiently-played but laid-back standards, I wasn't here to decide whether this was the best version of "The Girl from Ipanema" I've ever heard so much as to check out the experience-- and it was surprisingly engaging. If it seems strange to come to a video game to watch a concert, remember that we've been listening remotely to live shows since the first days of radio. In that sense, this is nothing new-- and while it's more disorienting than tuning in to the Grand Ole Opry, it's more rewarding than other options, like watching a streaming video of a rock show through a tiny RealPlayer window. At least in Second Life, you can move the camera..." From Get That Out of Your Mouth #24: The Show Must Go Online by Chris Dahlen.
"...Now with the growth in dancing in this alternative reality, I have more of an incentive to make an avatar for myself and start wandering around and going to clubs, get-togethers and even dance studios. This notice caught my attention:
This past Tuesday, the Takeshi's Dance Studio in Second Life held an event to teach people how to dance. Plus, they will soon be building a theater and they need dancers ... Dancing doesn't just take place in virtual clubs. Events on private islands also feature music and dancing. Earlier this year, Creative Commons hosted a party...
In "Get That Out of Your Mouth #24 - The Show Must Go Online," Chris Dahlen describes the experience of listening to music in Second Life, how indie bands take virtual tours and make money, and the limitations of gestures and dance in this virtual world:
Players also can't control their avatars with any spontaneity or nuance; if you want to dance at a show, you can trigger an animation that runs your avatar through the steps, but you can't personally step in to make the arm rest on your partner's back just so. The same problem applies for performers. If you look closely at Few as he's playing guitar, you'll see that he can't strum or move his hand on the fretboard, and his face is trapped in a stiff mask, like a dummy in a vintage Disney World ride. And while you can add more animations and poses to an avatar, if you have the time and you're handy with 3-D modeling software, you can't make the spur-of-the-moment gestures that even stiff indie rockers need to liven up their acts.
- And Rik Riel (his SL name) in his "Dancing with myself... with other people" post, says that he goes to a lot of SL events and music gigs. He describes how dancing works:
In Second Life, you are given a small set of pre-defined dance animations that you can trigger from your keyboard. It's easy to pick up other animations at parties or from friends. You can easily perform everything from a waltz to b-boying to country two-step (not to mention more, uh, provocative dancing).
But he's not really sold on this virtual dancing:
A real basic question is Why would you do this? Virtual dancing provides none of the perks of real world dancing, either the joy of movement, the physical response to music you are digging, the pleasure in touching another person, or the pride of executing a difficult move. There's no learning curve, no physical exertion, no effort to anything, except maybe the effort involved in designing your own custom animations.
I've been dancing my whole life, from all of the terrible 80's trends (smurf, running man, pac man, etc.) to popping and locking, salsa, tap, and now lindy hop. There are few other moments when I am happier than when I am dancing. There is nothing to compare to sharing great music with someone you care for swaying in your arms. I love dancing so much, I help administer the largest lindy hop community website on the net, Yehoodi.com. [blogged by Doug on Great Dance Weblog]
June 08, 2006
Space Invaders 2006
Isn't Just a Fictional Space
Video games, as an interactive medium, have been designed for many different purposes, such as education, entertainment and advertising. Exploring what makes a game compelling as an experience, whatever its purpose, is the aim of Space Invaders 2006. "Space Invaders 2006" -- by Evan Barba and Kuan Huang (aka Hawk) -- is a public video game that isn't just a fictional space the players escape into: it takes advantage of a real-world space and the architecture inside it, transforming them into a game playground. Basically, "SI 2006" is a video game projected onto a building. The player controls an aircraft by moving his or her body in the space, shooting down the invaders before they move off the building.
June 07, 2006
Metaverse meets mash-up:
FREE CULTURE REMIX IN SECOND LIFE
Metaverse meets mash-up: come June 15, there's going to be another Free Culture/Creative Commons event in Second Life, this one springboarding off the recent CC Art Show 2006 at NYU and the Sharing is Daring event at Harvard. The object here is to take the art featured at those sites, and then remix it for an in-world showing on the 15th. You can do the remixing with Photoshop and other standard tools, of course, but for this event, the ideal medium is SL itself. Which is what I did with an Untitled photo by Joseph Gergel, uploading it as a texture, displaying it in-world, and using it as a backdrop for a dramatic screenshot, above.
No doubt Residents can come up with way better remixes, taking the appropriately-licensed art from here and here*, then converting them into screenshots, 3D sculptures, interactive sites, whatever. I'm looking forward to reporting on what comes out of this.
Deadline for submission is the end of June 13. E-mail a screenshot of your entry to Jennifer Yip of Creative Commons (Genevieve Junot in SL), and be sure to include your RL and/or SL name for proper credit. Prizes will be virtual CC t-shirts and other tchotchkes, real and virtual. In any case, do come to the event to share your work, and hopefully meet some of the original artists, who may also be in attendance. (Full disclosure: I'm now honored to be a part-time consultant for CC's events and activities in SL.) ... [blogged by rubaiyat on New World Notes]
June 02, 2006
Second Life & Flickr Mashups
Alas, Second Life! Web 2.0 in a Virtual World
Second Life has been my new hacking obsession ever since I bought a laptop fast enough to run it. I don't spend a lot of time socialising in the gameworld, but I am fascinated by the possibilities for makers of new user interfaces, useful virtual objects and playful toys. With every object being scriptable, aware and active, it's a proving ground for Everyware.
Version 1.10 was released last week, and hidden among the exciting new visual modeling possibilities of shiny rendering and flexible objects was Second Life's own XMLHTTPRequest: llHTTPRequest. Using asynchronous callbacks, it gives the platform an important new capability: communication with the web on demand. A lot of what we are learning about AJAX makes sense here, in this world of Asynchronous Lindenscriptinglanguage And Some-sort-of-data (ALAS!)
I've spent a few hours hacking on some toy objects with this new capability, starting with the mashup de rigueur: Flickr integration. My home in SL now sports a simple picture frame. Touch it and it looks up your avatar name to see what your favourite Flickr tag is, picks a random picture with that tag from Flickr and displays it on its surface. If it hasn't met you before, it asks you to tell it what tag to use.
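The frame's behaviour can be sketched roughly as follows. This is a hedged Python reconstruction of the logic only (the real object is scripted in LSL and calls llHTTPRequest against a web service); the avatar-to-tag table and the photo lists here are hypothetical stand-ins for what the frame would actually fetch.

```python
import random

# Hypothetical data standing in for the Flickr-backed web service.
FAVOURITE_TAGS = {"Matt Biddulph": "robots"}
PHOTOS_BY_TAG = {"robots": ["r1.jpg", "r2.jpg"]}

def on_touch(avatar_name):
    """Look up the toucher's favourite tag and pick a random photo
    with that tag; if the frame hasn't met this avatar before, ask
    what tag to use instead."""
    tag = FAVOURITE_TAGS.get(avatar_name)
    if tag is None:
        return "What tag should I use for you?"
    return random.choice(PHOTOS_BY_TAG[tag])
```

In the in-world version the lookup would arrive via an asynchronous HTTP callback rather than a synchronous return, which is exactly the AJAX-style pattern the post is pointing at.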
Because Second Life is so wonderfully visual, here's a little demo movie that I recorded with Tom Coates:
how it works ... [posted by Matt Biddulph on Hackdiary] [via Wonderland]
May 30, 2006
SonarMatica presents Always On
Into the Streets
ALWAYS ON--curated by Drew Hemment, José Luis de Vicente, Óscar Abril Ascaso and Advanced Music--is a display dedicated to mobile culture and location projects. It is an initiative based on exhibition and participation, taking SonarMática out onto the streets for the first time. Advance tickets for Sónar by Night as well as general passes for the three days and two nights of Sónar 2006 are already on sale.
This year, SonarMática presents itself as more participatory than ever: a relationship game played over mobile telephones by nine hundred players at the same time, organised by Blast Theory; a walk through the electromagnetic old quarter of Barcelona's Ciutat Vella with Michelle Teran; and a geocaching session are some of the ideas at the exhibition.
:: Participation Projects :: Those wishing to register to take part in the participation projects should go to the information stand in the exhibition: Blast Theory (UK), Day of the Figurines; Michelle Teran (CA), Life: A User's Manual; Akitsugu Maebayashi (JP), Sonic Interface; Counts Media (US), Yellow Arrow; Geocaching
:: Exhibition Projects :: Antoni Abad (ES), Taxistas, gitanos y prostitutas transmiten desde móviles en www.zexe.net; Alejandro Duque (CO), TTSM, http://soup.znerol.ch, http://co.lab.cohete.net, www1.autistici.org/communa/platanal;
Jeremy Wood (UK), Meridians; Jens Brand (DE), gPod / G-Player; Preemptive Media (UK), Zapped!; Proboscis (UK), Urban Tapestries / Social Tapestries; Christian Nold (UK), Bio Mapping; Mark Shepard (US), Tactical Sound Garden Toolkit; Socialfiction (NL), .walk; Jeff Knowlton / Naomi Spellman (US), The Interpretive Engine for Various Places on Earth; ExtraMàtica, raster-noton, Essential Room.
Advance tickets on-line: www.ticktackticket.com. By phone: +34 93 445 06 60 (from abroad; Mon-Fri, 10.00am to 10.00pm) or 902 150 025 (from Spain, Mon-Sun, 10.00am to 10.00pm; from abroad, Sat-Sun, 10.00am to 10.00pm).
Note: neither general passes nor accreditations give admission to the concert on Thursday the 15th at L'Auditori de Barcelona.
All the information at:
May 24, 2006
DIGITAL TERRITORY: BUBBLES
The Domestication of the Ambient Intelligence Space
"[...] By defining digital borders, the vision of digital territory creates a continuum between the physical world and its digitised counterpart. The construction of digital boundaries consolidates the gateways already established between these two worlds. This paradox will be catalysed by the implementation of a growing number of bridges between the two environments. Location-based services, radio frequency identification tags, body implants, ambient intelligence sensors, etc. will permit the implementation of a trustworthy environment and therefore the domestication of the ambient intelligence space by the individual. The vision will facilitate the transition through a traditional society that coexists with an information society, to a single society whose citizens have accepted and adopted the fusion of physical and digital realities. In this future society, people will still be able to control and manage distance from others with new tools provided by ambient intelligence space technologies." From DIGITAL TERRITORY: BUBBLES by Laurent Beslay and Hannu Hakala.
May 22, 2006
Virtual Sumo is inspired by sumo, the Japanese sport in which two contestants try to force their adversary outside the competition area or make him touch the ground. The attack is preceded by a phase of mutual observation (shikiri) in which the fighters use eye contact, study and strategic ploys to make their opponent lose concentration. In this phase, it seems that even slight changes on the exposed areas of the contestants' skin allow both opponent and audience to judge the state of the fighters' emotions.
In the Virtual Sumo installation, two fighters, seated at a table with a monitor positioned so that they are obliged to assume a posture similar to that used in sumo, rest their left hands on a sensor that records the state of their emotions (heartbeat and surface perspiration). On the monitor the competition area can be seen: a circle in which two game pieces represent the two contestants, whose emotional states are also visualized moment by moment. The contestants can thus check on the screen their own and the other's state of concentration and, given the proximity of their faces, can use strategies to disturb their opponent.
The more emotionally neutral their state of mind, the more their (virtual) energy grows. When they think the moment has come, they press a button with their right hand to launch the attack: the piece that represents them flings itself at the opponent's piece, trying to throw it out of the circle. On the wall above the two contestants, a large screen dramatically reveals the alternation of the two fighters' emotional responses.
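A toy model of that scoring loop might look like this. The sensor scales and the arousal formula are my assumptions, not the installation's documented design: energy accumulates faster the calmer (more neutral) the fighter's readings are, and an attack is resolved by comparing stored energy.

```python
def update_energy(energy, heart_rate, perspiration,
                  rest_rate=60.0, gain=1.0):
    """Accumulate virtual energy: the closer the fighter's readings
    are to a neutral baseline, the faster energy grows. Arousal is a
    crude 0..1 blend of heart-rate deviation and skin perspiration
    (assumed scales, for illustration only)."""
    arousal = min(1.0, abs(heart_rate - rest_rate) / 60.0 + perspiration)
    return energy + gain * (1.0 - arousal)

def attack(energy_a, energy_b):
    """When a fighter presses the attack button, the piece with more
    stored energy pushes the other out of the circle."""
    return "A wins" if energy_a > energy_b else "B wins"
```

A perfectly calm fighter gains the full `gain` per tick, while a rattled one (racing heart, sweaty palm) gains nothing, which mirrors the installation's incentive to stay composed under the opponent's gaze.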
May 19, 2006
America's Army: Special Forces (Overmatch)
War Vets Feature in US Army Game
War Vets Feature in US Army Game by Alfred Hermida, Technology editor, BBC News website: The US military is putting soldiers who fought in Iraq and Afghanistan into its video game in an attempt to put a human face on its operations overseas. The soldiers' stories are part of a new version of America's Army, a game which is part of its recruitment strategy.
The US military denies that the project is a way of selling an unpopular war to the American public. Instead the free game is designed to inspire young people by relating the real-life experiences of the troops. "The majority of the public is not touched by the war," said Major Jason Amerine, who served in Afghanistan and is one of the nine soldiers appearing in the game. "It is dangerous to have an army overseas fighting and for a public to become so disconnected that they don't realise that we are not automatons over there. We're real people," he told the BBC News website.
America's Army was first released in July 2002 and has gone on to become one of the top online games, with seven million registered players.
It taps into the popularity of military and war simulation titles among predominantly young men.
It was developed by the Army to simulate combat and life in the military and reach young people for whom video games are a part of everyday life.
In the latest version of the game, called America's Army: Special Forces (Overmatch), the military is adding the experiences of nine soldiers who served in Afghanistan or Iraq.
The title has a so-called real heroes section where players can learn about the real-life troops.
In a virtual recruiting room, gamers can click on a soldier to hear them tell their story. They can also compare their achievements in the game to those of the GIs.
"It gets our stories out there about what the army is doing," said Sergeant Matt Zedwick, who served and was wounded during a tour of duty in Iraq in 2004.
"It is a good communications device to introduce people to what is really going on, rather than what you see on the news."
"It shows we are not robots, that we're not trained killing machines. We're just people," said the 25-year-old who is now studying graphic design at a college in Oregon.
Reaching the public
The US military sees America's Army as an information, rather than as a recruitment, tool.
The latest version was showcased at last week's E3 games expo in Los Angeles and is due to be released in the summer.
It comes at a time of growing unease in the US about the war in Iraq.
And last year, an official US government study warned that the military was falling behind in its effort to recruit troops for combat operations in Iraq and Afghanistan.
But the people involved in the title say there is no connection between the new game and America's troubles over its combat operations.
"We're not talking about the war, we're talking about people in the war," insisted Major Amerine, who served in Afghanistan in 2001 during the initial invasion.
"We are not attempting in any way to sell a war to the people, that has nothing to do with our agenda.
"We're trying to create awareness about the people doing the fighting, and that is a very apolitical agenda," said the major who now teaches at West Point.
As part of the "real heroes" campaign, figures of the soldiers featured in the game are to go on sale.
The Army says these are not designed to be toys for children but items for collectors who wish to express their support for the troops.
Seeing themselves as little plastic soldiers is a strange experience for the troops involved.
"It is kinda funny and a little creepy. Unbelievable in a way," said Sergeant Zedwick.
"It is kind of funny to think there are going to be millions of people buying little dolls of you in the stores." [posted by Ryan Griffis on Rhizome Raw]
May 18, 2006
PSP SSS (Site Specific System) + Nintendo Amusement Park
"Physically Augmented Reality" Games
The designers have imagined specific structures in specific places in the city for specific PSP communities. Rather than sitting in the comfort of your home playing against people who are physically distant, the Site Specific System provides a platform for the ultimate physical-virtual gaming session. For example, a platform built above a road would be particularly suitable for playing racing games.
The System takes on the characteristics of a high tech factory processing machine, containing steel panels that move via pneumatic pistons. The player is taken down into the system by a lift and positioned in their seat, the panels move in around them so that they fit neatly around the player like a glove. A panel is then catapulted away from under their feet, leaving their feet dangling to feel the swirling air and amplified noise of the traffic below. Each player has an additional set of buttons attached to their PlayStation Portable, which allow them to control the panels of their opponents. (video)
A project developed at the PSP Design Club, a lab where European creatives were invited to create a manifesto incorporating the PSP, drawing inspiration from one of its core values: "freedom". Via Digital Experience.
Much more physically challenging:
The aim of ITP students Daniel Albritton, Huang-ling Chen and Noah Shibley's Nintendo Amusement Park project is to build a life-sized game of Super Mario Brothers that you jump through in real space. The amusement park would use mechanical technology to give a player super powers, which they must learn to control as they bound through a massive obstacle course in 3D space.
This includes a special jumping power that would be provided by an intelligent winch mounted on a two axis crane in the ceiling. It would work similarly to the types used for Hollywood stunts.
The early prototype lets the user begin exploring the sensation of augmented jumping power. The player is strapped into a parachute harness, bungees are attached to a motorized trellis in the ceiling, and the trellis is raised until the user's feet just touch the ground. They are left to explore the 'bounce space' for a few minutes, and are then tasked with a few challenges (jump over an object, hit an object high in the air, avoid an object moving on the ground).
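As a back-of-the-envelope illustration of why the bungees help (the model and numbers are mine, not the project's): if the winch or bungee cancels a fraction of the player's weight, effective gravity drops, and the same take-off speed carries the player proportionally higher.

```python
G = 9.81  # standard gravity, m/s^2

def jump_height(takeoff_speed, assist_fraction):
    """Ballistic jump height under partial weight support:
    h = v^2 / (2 * g_eff), where g_eff = g * (1 - f) and f is the
    fraction of body weight the bungee/winch cancels. Illustrative
    physics only, not the project's actual control model."""
    g_eff = G * (1.0 - assist_fraction)
    return takeoff_speed ** 2 / (2.0 * g_eff)
```

So a rig that cancels half a player's weight doubles the height of an ordinary jump, which is roughly the "super power" sensation the prototype is after.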
May 15, 2006
RFID Snakes and Ladders
The Real World as Board for Online Game
RFID Snakes and Ladders is a giant game of snakes and ladders that shows how RFID (Radio Frequency Identification) technology can be used beyond the warehouse door. The game has been developed by new media company Blink and is being played at the Media Centre, Huddersfield, until June 2nd. It’s just like any other game of snakes and ladders, except the whole building – which is home to over 120 separate small businesses – becomes part of the game board.
First and foremost, RFID Snakes and Ladders is fun. Players are issued with a counter containing an RFID chip. All they have to do to get into the game is pass it in front of the giant dice and check how far they ‘rolled’ on a connected website. If they land at the bottom of the ladder, they pass their counter in front of the model ladder to climb it. If they hit the head of a snake, it’s a slippery slide unless they get to the model snake in time.
Models are placed at intersections around the Media Centre: in the entrance, the café bar and outside the lifts.
Players take a turn every three hours (unless they throw a six or land on a snake or a ladder, when they have to act quickly). In this way the game becomes part of the normal working day. Players might take a turn when they first arrive, another when they grab a coffee, and another at lunchtime. Playing takes just a few moments each day.
And if they land on a bonus square, they get a secret bonus prize. Players who reach the top square first win prizes from the centre’s Café Ollo menu.
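The rules described above amount to a small state machine; here is a minimal sketch in Python. The board layout and square numbers are invented for illustration and are not the Media Centre's actual board.

```python
import random

# Hypothetical board layout: a square at a ladder's foot climbs up,
# a square at a snake's head slides down.
LADDERS = {3: 22, 8: 30, 28: 84}
SNAKES = {17: 4, 54: 34, 99: 7}
WINNING_SQUARE = 100

def resolve_square(position):
    """Apply any ladder or snake on the square just landed on."""
    position = LADDERS.get(position, position)
    return SNAKES.get(position, position)

def take_turn(position, roll=None):
    """Advance by a dice roll (the giant RFID dice in the installation)."""
    if roll is None:
        roll = random.randint(1, 6)
    return resolve_square(min(position + roll, WINNING_SQUARE))
```

For example, a player on square 50 who rolls a 4 lands on the snake at 54 and slides to 34: `take_turn(50, 4)` returns `34` — unless, as in the real game, they reach the model snake in time.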
RFID Snakes and Ladders is one of a number of experiments in mixed reality gaming, using the real world as the board for an online game, including Pac-Manhattan in New York and Mogi in Tokyo.
But it’s something more besides. It’s also a way of connecting people who may ostensibly inhabit the same location, but who often feel isolated from each other.
“After the Media Centre we hope to test the game in places where people are in danger of feeling cut off or lonely,” said Andrew Wilson of Blink Media. “For example in sheltered housing or the children’s ward of a hospital, where playing a simple, ongoing game with others might help to bring people together face to face.”
For more information on RFID Snakes and Ladders contact Andrew Wilson at Andrew[at]blinkmedia.org or 07980 224927.
Notes For Editors
Blink is a Huddersfield-based company, working with film, new media and mobile technology since 1999. Projects include the Guardian newspaper’s interactive SMS event in 2001, which taught the mainstream media how to use text messaging.
The Snakes and Ladders game programming and web design was by C.H.I.P.S. vzw (www.chipsvzw.be), new media specialists based in Antwerp, Belgium. In 2004 they created the SMS project STADSchromosomen for UNESCO’s World Book Capital Antwerp.
RFID hardware was designed and built by Daniel Blackburn of Carbon Based Games, Huddersfield. Daniel was recently awarded a two year Fellowship by the National Endowment for Science, Technology and the Arts (NESTA).
Illustrations were by Andy Sykes, an award-winning illustrator and animator from Leeds. He has been awarded a Pocket Shorts commission to make a film for mobile phones; the commissioned film won a runner-up prize at the Texas DCI Mobile Content Festival and has been screened at festivals in Bristol, Rotterdam and San Francisco, as well as on BBC2's Homegrown Hollywood.
The Snakes and Ladders models were made by Richard Dawson, an experienced TV set designer who works on Casualty and Life on Mars.
March 13, 2006
"The Real": On the Border of Butler and Dowden in Second Life
Ars Virtua Gallery and New Media Center is looking for works for our inaugural show. The theme of this show is "The Real", and it will be exhibited on the grounds of Ars Virtua, which is located on the border of Butler and Dowden in Second Life.
We are looking for 2D media, video and sculpture (including scripts) produced within the 3D engine. All representable media will be accepted for consideration but artists are cautioned to be economical with the number of prims used in sculpture.
For too long "the virtual" has been supplanted by "the real" in the realm of communication and entertainment. We recognize that there is no need for replacement, but for extension. We see that 3D game engines are creating new environments with new rules that are just as tangible as the old ones, but on new terms. Education and art have been waking up to the value of simulation as it relates, and does not relate, to campus and museum life. The value of simulation, or perhaps the threat of it, occurs when simulation begins to trump that which it is simulating. That is the purpose of this exhibit, and though it does not make every exhibit in space-time useless or passé, it does attempt to offer a wholly electronic alternative, an "other" real.
"The Real" will be juried by a group of artists from the CADRE Laboratory for New Media.
Please submit files via email in the following formats:
jpeg - no larger than 400x300;
mp4 - no larger than 1mb;
descriptions- no more than 250 words
All submissions and requests for more information should be sent to gallery[at]ArsVirtua.com
Letter of interest due: March 14; Opening: April (TBA)
March 08, 2006
Augmented Reality on i-mate SP5
An Extended Version of the Real World?
"Daniel Wagner working on some innovative augmented reality projects. He has been involved in a project you may already have heard of called The Invisible Train, and now he has created a nice demo on an i-mate SP5 Windows Mobile Phone.
In this demo (photo to the left), the character on the display moves around based on the location, angle, etc of the phone. With the phone facing the marker, you can move around and view the character from different angles. If you turn the phone away from the marker, the character moves off the screen. This is an amazing display of Augmented Reality!
"The basic idea of Augmented Reality (AR) is that is lets people share an extended version of the "real" world. Virtual objects can show up in real space that can be viewed an manipulated by more than one person (which is often a problem in Virtual Reality).
Until now AR is usually done with expensive hardware using HMDs and PCs or notebooks for mobile setups. We focus in our work on using low-cost, mobile hardware that people ideally already possess (PDAs, mobile phones), because we believe that this is one of the few chances how AR can leave research labs and go into public." [...]
Daniel Wagner is a computer science doctoral candidate who received his MSc from Vienna University of Technology. During his studies he worked as a contract programmer and joined Reality2, developing VR software. After finishing his computer science studies, Daniel was hired as a lead developer by BinaryBee, working on high-quality single- and multi-user web games. Next he was employed as a developer for Tisc Media, doing 3D engine development. Recently Daniel was hired as a consultant by Greentube for the development of "Ski Challenge ’05". Daniel currently works as a researcher at Graz University of Technology, writing his PhD thesis on truly mobile Augmented Reality. His current research interests are real-time graphics and massively multi-user augmented reality on mobile devices." [via Mobility Site]
February 23, 2006
The Guilt Eye
Human Avatars, a multimedia installation created by Andrea Zapp with music by Vini Reilly, plays on two classic levels: the real and the virtual. Visitors walk through the installation space and discover a small wooden cabin, which they are invited to enter. Once inside, their bodies are filmed and projected into a scaled-down model of the same cabin. Peeking through a small window, visitors can watch the movements of their own projections and those made by other people. This crossed game of filming and projecting pushes everybody into visual contact with a different 'self', which could equally be a real person or one's own avatar, assuming the role of a spy and, unconsciously, of a person kept under surveillance. Although the architecture and the scenario are friendly and accessible, the immediacy of the interactive experience remains pending and ambiguous. Probably one of the inhibiting elements is the interactive fulcrum itself: the eye and its artificial replacement often carry uneasiness, because they are connected to feelings of surveillance and control. Furthermore, in this case, the holistic lyricism created by the growing consciousness of one's own being in the world is suffocated by the voyeur's sense of guilt. This role is evidently assumed by the observer, and it is heightened because this is not a hidden spying activity but an open act, made for all the world to see." Francesca Tomassini, Neural.
February 09, 2006
Epidemic Menace is a pervasive crossmedia game. Crossmedia games draw on a wide variety of gaming devices, including traditional media channels and game consoles as well as mobile and pervasive computing technology, to allow for a broad variety of game experiences. The overall goal of the game is to prevent the virus from escaping the campus, to clear the campus of the virus instances, to create an antivirus to save Prof. Ivy Miller, and to find out what happened on the campus. The winner is the team that leads the high score list.
The game consists of two competing teams and each team holds a minimum of 5 players. Each team receives at the beginning of the game: 5 smart phones (one for each player); 1 Palmpilot; 1 AR system; 5 special online accounts (one for each player). Players can change devices or play modes anytime and can play on their own if they wish but have clear advantages through collaboration.
Game Space: The game is played in a physical and a virtual game space and teams have to play in both spaces in order to extinguish the virus. The virtual game space will be a model of the physical game space and game appearance and game mechanics will be adapted in the physical and the virtual game space. For example, the virus appears differently in both spaces.
In the physical world the virus may appear as spatialized sound, a 2D map-based display of viruses in proximity, or overlaid 3D graphics on AR devices. In the virtual world the virus may appear as sound, or as animated 2D and 3D graphics.
Storyline: The Dean of the Schloss Birlinghoven medical research laboratory, Prof. Dr. Wolfgang Mathiessen, has been working for years on an epidemic prevention programme called EEPA (European Epidemic Prevention Association) and is very close to a major breakthrough that would revolutionize the virus simulation programmes used by medical institutes around the world, when most of his work is stolen and copied by a villain. The villain uses his work to infect a defined area of the Schloss Birlinghoven complex; his motives and his approach are unknown. The situation escalates when one of the medical researchers, Prof. Ivy Miller, is infected by the virus and falls into a coma. The EEPA is informed and a small group of experts is sent to Schloss Birlinghoven to take care of the situation. [via pasta and vinegar] Report about the Crossmedia Game Epidemic Menace by Jan Ohlenburg, Irma Lindt, and Uta Pankoke-Babatz [PDF]
February 07, 2006
Siemens launches "Kick Real" cameraphone game
Physical Interface/AR for Mobile Phone Games
"With World Cup fever sweeping Germany (or at least sweeping the tournament's sponsors), Siemens has come up with a novel way for cellphone users to practice their kicks (which we think is at least a little ironic, given that Siemens doesn't even make cellphones anymore). The company's "Kick Real" game provides cameraphone users with a virtual soccer field; point the camera at your foot, and you can kick a virtual ball that you view on the phone's LCD. We can't wait for this to catch on, so we can get our own kicks watching gamers staring into their phones and kicking the air." [blogged by Kellee USC Interactive Media Division Weblog]
February 06, 2006
Making the Virtual Real
reBang(ing) via 3D Printers
As we move into the fabrication future, we'll see a surprising cross-over between the skills of virtual world designers and the skills of designers of physical objects.
We're all familiar by now with the idea of real money being used to buy virtual goods, and even with virtual money being used to buy physical goods. The intersection of online worlds and the real world doesn't stop there, however. It turns out that the increasing detail of 3D objects in virtual environments makes it possible to think of them not simply as game objects, but as digital prototypes -- and 3D printers are the tool of choice for turning the prototypes into real objects. WorldChanging ally Csven Johnson is at the forefront of this movement. On his blog reBang, he discusses his efforts to convert the game data for objects into CAD data usable with rapid prototyping hardware.
The metaverse is not just an ethereal “storyteller’s” world. It’s a world comprised of data. Just look at the reasons Marketing people are salivating over it. The tracking data is orders of magnitude better than trying to count eyeballs watching a television screen. And in a 3D interface (which is what those videogames really are), that data goes well beyond just “hits” or “click-throughs”- it’s comprised of “vectors” and “3D positional data”. And here’s the important part: that data can be converted into more than just marketing statistics. It can be converted into real product; something you can hold… in the flesh. The Story made Real.
The image above [At left here -- Jamais] is a screen capture from Pro/ENGINEER CAD, perhaps the most widely used product development 3D application for design and manufacturing. That object is a piece of a virtual game object “captured” from id’s Quake 3 videogame (the barrel of a Rocket Launcher). It was not created in my CAD application. It was not ripped from the game files. I “hijacked” the data streaming to my monitor using a freely available tool. And now, if I desired, I could manipulate the data and create a real product...
...Don't get tripped up by the example being a "rocket launcher" that wouldn't work in the real world. The tools for creating virtual objects in or for games have become startlingly sophisticated, while in many cases becoming much easier to use. Some games, like Second Life, allow players to craft complex goods, from weapons to furniture to clothing, as well as design buildings. A home 3D printer of the near future may not be able to (or allowed to) print out a working gun -- and I would expect few people would complain about that -- but it almost certainly would be able to print out a chair, and probably be able to print out some kinds of clothing... [Posted by Jamais Cascio on WorldChanging]
OGLE and the Second Life Avatar
Having played with the OGLE tool for a few days (version 1b), I’ve learned some things that might be helpful to those hoping to have their customized Second Life avatars or objects fabricated. My own goal is to take the avatar geometry and convert it to “solid” CAD data, but the process I’m using to get there might be useful even if your goal is to just send out an .obj or .stl polymesh file.
First off, be aware that the captured videostream data most likely isn’t “clean”… even if it looks good. Be sure to search for duplicate triangles/polymeshes since there will probably be plenty. After deleting any duplicates, be sure to select and “merge” all the remaining polygon edges and vertices since there will almost certainly be multiples of those as well... [blogged by Csven Johnson on reBang]
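The cleanup steps Johnson describes (deleting duplicate triangles, then merging coincident vertices and edges) can be sketched in Python. This is an illustrative sketch of the general technique, not OGLE's or any CAD package's actual code; the mesh is assumed to be a flat list of triangles, each a tuple of three vertex coordinates.

```python
def dedupe_triangles(triangles):
    """Drop duplicate triangles, ignoring vertex ordering."""
    seen, clean = set(), []
    for tri in triangles:
        key = frozenset(tri)  # same 3 vertices in any order == same triangle
        if key not in seen:
            seen.add(key)
            clean.append(tri)
    return clean

def merge_vertices(triangles, tol=1e-6):
    """Snap near-coincident vertices to a shared vertex list,
    returning (vertices, faces) with faces as index triples."""
    verts, index, faces = [], {}, []
    for tri in triangles:
        face = []
        for v in tri:
            key = tuple(round(c / tol) for c in v)  # quantize to tolerance
            if key not in index:
                index[key] = len(verts)
                verts.append(v)
            face.append(index[key])
        faces.append(tuple(face))
    return verts, faces
```

After these two passes, the indexed `(vertices, faces)` form maps straightforwardly onto an .obj-style polymesh, which is typically what a downstream CAD or rapid-prototyping tool expects.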
January 26, 2006
John Seely Brown on the End of Cyberspace
The Disappearance of the Interface
John Seely Brown is a former Chief Scientist of Xerox PARC, and coauthor of two recent excellent books: The Social Life of Information (with Paul Duguid) and The Only Sustainable Edge (with John Hagel). John answered the big question in a more philosophical vein than Ross Mayfield:
Cyberspace is an outmoded term. Let's consider as an alternative The Informated World, a world where the virtual and physical boundaries have become blurred and the virtual and physical worlds dance together and enhance each other.
Mark Weiser's vision of ubiquitous computing was a start down this phenomenological path, where the concept of 'ready-at-hand' now wondrously crossed the physical/virtual boundary. Ideally, we all sought out a state of being where, much as in Heidegger's story, the blind man sitting feels the handle of the cane, but once he starts walking the handle disappears and he feels as if he were directly touching the world.
Likewise, in the informated world, the interface disappears and we feel we can touch the augmented world directly.
Personally, I think this idea of technologies merging with us-- not in the sci-fi implants kind of way, but merging through interaction and familiarity, the way a bicycle or really good pen can become an extension or expression of our bodies-- is an important one to highlight. In my view, one reason cyberspace made sense for so long was that our interactions with computers supported the idea of The World being separate from The Matrix, with only the monitor joining the two together. Brown points out that as technologies change, the character of our experience with them changes; and thus our sense of the world-- and of alternate digital worlds-- inevitably changes as well. [posted by Alex Soojung-Kim Pang on The End of Cyberspace]
January 13, 2006
...and more thoughts on the End of Cyberspace
Anyone who's ever seen a book has seen ISBNs, or International Standard Book Numbers -- that string of ten digits, right above the bar code, that uniquely identifies a given title. Now come ESBNs, or Electronic Standard Book Numbers, which you'd expect would be just like ISBNs, only for electronic books. And you'd be right, but only partly. ESBNs, which just came into existence this year, uniquely identify not only an electronic title, but each individual copy, stream, or download of that title. Like little tracking devices that publishers can embed in their content. And not just books, but music, video or any other discrete media form -- ESBNs are media-agnostic.
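The ESBN scheme's internals aren't public, but the ten-digit ISBN it is named after has a well-known structure: the last digit is a weighted checksum of the first nine, which catches most transcription errors. A minimal sketch:

```python
def isbn10_check_digit(first_nine):
    """Compute the ISBN-10 check digit from the first nine digits.

    Each digit is weighted 10 down to 2; the check digit makes the
    full weighted sum divisible by 11 (10 is written as 'X').
    """
    total = sum((10 - i) * int(d) for i, d in enumerate(first_nine))
    check = (11 - total % 11) % 11
    return "X" if check == 10 else str(check)
```

For instance, `isbn10_check_digit("030640615")` yields `"2"`, completing the valid ISBN 0-306-40615-2.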
"It's all part of the attempt to impose the restrictions of the physical on the digital, enforcing scarcity where there is none," David Weinberger rightly observes. On the net, it's not so much a matter of who has the book, but who is reading the book -- who is at the book. It's not a copy, it's more like a place. But cyberspace blurs that distinction. As Alex Pang explains, cyberspace is still a place to which we must travel. Going there has become much easier and much faster, but we are still visitors, not natives. We begin and end in the physical world, at a concrete terminal.
When I snap shut my laptop, I disconnect. I am back in the world. And it is that instantaneous moment of travel, that light-speed jump, that has unleashed the reams and decibels of anguished debate over intellectual property in the digital era. A sort of conceptual jetlag. Culture shock. The travel metaphors begin to falter, but the point is that we are talking about things confused during travel from one world to another. Discombobulation.
This jetlag creates a schism in how we treat and consume media. When we're connected to the net, we're not concerned with copies we may or may not own. What matters is access to the material. The copy is immaterial. It's here, there, and everywhere, as the poet said. But when you're offline, physical possession of copies, digital or otherwise, becomes important again. If you don't have it in your hand, or a local copy on your desktop then you cannot experience it. It's as simple as that. ESBNs are a byproduct of this jetlag. They seek to carry the guarantees of the physical world like luggage into the virtual world of cyberspace.
But when that distinction is erased, when connection to the network becomes ubiquitous and constant (as is generally predicted), a pervasive layer over all private and public space, keeping pace with all our movements, then the idea of digital "copies" will be effectively dead. As will the idea of cyberspace. The virtual world and the actual world will be one.
For publishers and IP lawyers, this will simplify matters greatly. Take, for example, webmail. For the past few years, I have relied exclusively on webmail with no local client on my machine. This means that when I'm offline, I have no mail (unless I go to the trouble of making copies of individual messages or printouts). As a consequence, I've stopped thinking of my correspondence in terms of copies. I think of it in terms of being there, of being "on my email" -- or not. Soon that will be the way I think of most, if not all, digital media -- in terms of access and services, not copies.
But in terms of perception, the end of cyberspace is not so simple. When the last actual-to-virtual transport service officially shuts down -- when the line between worlds is completely erased -- we will still be left, as human beings, with a desire to travel to places beyond our immediate perception. As Sol Gaitan describes it in a brilliant comment to yesterday's "end of cyberspace" post:
In the West, the desire to blur the line, the need to access the "other side," took artists to try opium, absinthe, kef, and peyote. The symbolists crossed the line and brought back dada, surrealism, and other manifestations of worlds that until then had been held at bay but that were all there. The virtual is part of the actual, "we, or objects acting on our behalf are online all the time." Never thought of that in such terms, but it's true, and very exciting. It potentially enriches my reality. As with a book, contents become alive through the reader/user, otherwise the book is a dead, or dormant, object. So, my e-mail, the blogs I read, the Web, are online all the time, but it's through me that they become concrete, a perceived reality. Yes, we read differently because texts grow, move, and evolve, while we are away and "the object" is closed. But, we still need to read them. Esse rerum est percipi.
Just the other night I saw a fantastic performance of Allen Ginsberg's Howl that took the poem -- which I'd always found alluring but ultimately remote on the page -- and, through the conjury of five actors, made it concrete, a perceived reality. I dug Ginsberg's words. I downloaded them, as if across time. I was in cyberspace, but with sweat and pheromones. The Beats, too, sought sublimity -- transport to a virtual world. So, too, did the cyberpunks in the net's early days. So, too, did early Christian monastics, an analogy that Pang draws:
...cyberspace expresses a desire to transcend the world; Web 2.0 is about engaging with it. The early inhabitants of cyberspace were like the early Church monastics, who sought to serve God by going into the desert and escaping the temptations and distractions of the world and the flesh. The vision of Web 2.0, in contrast, is more Franciscan: one of engagement with and improvement of the world, not escape from it.
The end of cyberspace may mean the fusion of real and virtual worlds, another layer of a massively mediated existence. And this raises many questions about what is real and how, or if, that matters. But the end of cyberspace, despite all the sweeping gospel of Web 2.0, continuous computing, urban computing etc., also signals the beginning of something terribly mundane. Networks of fiber and digits are still human networks, prone to corruption and virtue alike. A virtual environment is still a natural environment. The extraordinary, in time, becomes ordinary. And undoubtedly we will still search for lines to cross. [blogged by Ben Vershbow on The Future of the Book]
December 14, 2005
Found Through 'Lost'
Found Connections--by Michael Ang, David Bamford, Joshua Dickens and Karl Channell--allows you to re-experience the narrative of Lost by interacting with other individuals in the physical space and discover your hidden connections through augmented reality. Hold up your fan (with a 4x4" tracker marker printed on it) in front of our installation. Live video is augmented with graphics which reveal a virtual character from the TV show Lost. When more than one character is on the screen, the connections between those characters are visualized between them on the screen. Reverse the fan to find out more details about your connections.
Found Connections is an interactive exploration of social interaction and the connections between people, using the characters and storylines of the television show Lost as the content. The show is rich with mystery and hidden meanings: everything from the multiple sides of different characters to the paranormal activity on the island. We want to create the same type of mystery within the Winter Show by giving visitors a simple fan with hidden content that's revealed in front of our installation.
Through augmented reality software these connections are made visible on a plasma screen (or possibly projection) display. The tag on the user's fan turns into a graphic of a character from the TV show and once other users stand in front of the same screen, lines are drawn connecting them as audio, text bubbles and video content from the show are triggered based on which characters are interacting.
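The trigger logic described, where each tracked marker resolves to a character and visible pairs light up their connections, can be sketched as a simple lookup. The marker-to-character mapping and the connection table below are invented placeholders, not the installation's actual data.

```python
# Hypothetical data: which marker ID shows which character, and what
# connection (if any) is revealed when two characters are both on screen.
CHARACTERS = {0: "Jack", 1: "Kate", 2: "Sawyer"}
CONNECTIONS = {frozenset(["Jack", "Kate"]): "met after the crash"}

def visible_connections(marker_ids):
    """Return (character, character, caption) triples to draw
    for the markers the tracker currently sees."""
    names = [CHARACTERS[m] for m in marker_ids if m in CHARACTERS]
    links = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            pair = frozenset([names[i], names[j]])
            if pair in CONNECTIONS:
                links.append((names[i], names[j], CONNECTIONS[pair]))
    return links
```

Each frame, the AR tracker would feed the currently visible marker IDs into `visible_connections`, and the returned triples would drive the lines, text bubbles and clips drawn between the characters on screen.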
This engages viewers through the content of Lost by allowing them to really take on the role of a character from the show. On another level, it also spurs real-life interactions with whomever the viewer happens to share a connection. It reveals hidden connections through augmented reality using Lost, and also the connections a person may happen to have with someone around them.
Part of the ITP Winter 2005 Show.
December 07, 2005
Smart Jigsaw Puzzle Assistant
RFID and Games
"ABSTRACT: Radio Frequency Identification (RFID) can be used as technology for coupling the physical and the virtual world in ubiquitous computing environments. In this paper, we show that RFID is also a very interesting option for building augmented real-world games. First, we describe several advantages of employing RFID technology for the development of gaming applications. Then we present the Smart Jigsaw Puzzle Assistant, a fully operational augmented jigsaw puzzle game which we have developed and prototypically implemented using miniature RFID tags and a palm-sized RFID scanner." From The Smart Jigsaw Puzzle Assistant: Using RFID Technology for Building Augmented Real-World Games by Jürgen Bohn.
December 02, 2005
Real and Virtual Worlds Collide
Neon Racer is a multi-user Augmented Reality racing game on an AR tabletop setting. The game displays only the players’ racing vehicles and the checkpoints. The active setting for the game is provided by the physical world, and all its parts can influence gameplay. Physical objects act as collision obstacles and influence the course of the race itself. Participants have to interact with both the virtual and real objects to succeed.
The position and edges of physical objects are detected using a camera and natural feature tracking. Players have to maneuver their vehicles past these objects and through the checkpoints.
Developed by Markus Weilguny, Doris Bernert, Wolfgang Litzlbauer, Ines Stuppacher and Manuela Waldner. [blogged by Regine on we-make-money-not-art]
December 01, 2005
Blurring the Boundary Between Gaming and Reality
"At the same time as people are diving into game worlds, the games are starting to invade the real world. You're moving through the streets of Melbourne stalking your quarry. A phone call on your GPS mobile phone tells you your target is only a few streets away. A direct kill means boosting your team's score. And in this game, winning is everything. Welcome to the world of "real-life" games that blur the boundary between gaming and reality.
Last year the classic arcade game Pacman came to life on the streets of New York. A player dressed as Pacman ran around Manhattan collecting virtual "dots" while trying to evade four players dressed as ghosts. Each player had a human controller back at base who monitored their progress online and phoned through strategy and advice.
A few months later, a lab at Singapore National University had developed a version of the same game using GPS and motion sensors to track players through the city's streets. This time, players could see the game overlaid on the real world through special goggles called augmented-reality headsets." Continue reading Street games by Fran Molloy, The Sydney Morning Herald.
November 23, 2005
Zones Light Up via Electronic Caresses
An interesting concept for a wearable gaming device. TechnoLust blends technological lust with a program to reawaken carnal lust. The gaming device takes advantage of wireless Internet technology. Players wear it as an undergarment to travel out of the virtual world of the computer to find and play with opponents.
TechnoLust is a game of erotic body zones that are lit up by electronic caresses. Upon entering the game, players select the mode they want to play within; this sends out a frequency which attracts other players set on the same game and frequency. When a participant encounters another, a game of electronic caresses begins in which pulses are sent back and forth electronically to arouse the senses. Designed in 2002 by Peter Allen and Carla Murray, KnoWear. [blogged by Regine on we-make-money-not-art]
Objects of Virtual Desire
Highlighting the Materiality of the Immaterial
Objects of Virtual Desire explores immaterial production in a virtual world, and if and how this can be transferred into an economy of material production. We have collected a series of objects produced and owned by inhabitants in the online world Second Life and will sell physical reproductions of these objects via our web shop.
Each chosen object has a strong sentimental value for the avatar (a person's virtual identity) who made or owned it. We have acquired (copies of) these objects, along with their owner’s personal story, within the in-world economy of Second Life. At www.objectsofvirtualdesire.com we present 3D modeled reproductions of these objects. The objects are made available in physical form on demand. When buying the materialised object one will also receive an image of the immaterial object with its original Second Life owner.
Our interest lies in exploring the concept of product design in a virtual world and what kind of interpersonal value objects carry in this context. Further questions are raised by transferring these objects to physical space and a "first life" economy. What is immaterial value-creation and can it be materialised? What does it mean to use a virtual world as a site of production?
The issues raised are relevant in a wider context, as value-production in the "post-fordist" era has become increasingly immaterial. Nike, for example, exploits the physical function of a shoe to create and market immaterial values, so pervasive that the shoe itself becomes almost virtual.
Objects of Virtual Desire exploits the augmented value of immaterial objects to create and market tangible products, thereby reversing the process and highlighting the materiality of the immaterial.
October 31, 2005
Gamer Buys Virtual Space Station
"A virtual space resort being built in the online role-playing game, Project Entropia, has been snapped up for $100,000 (£56,200). Jon Jacobs, aka Neverdie, won the auction for the as yet unnamed resort in the game, which lets thousands of players interact with each other. Entropia also allows gamers to buy and sell virtual items using real cash.
The space station is billed as a "pleasure paradise". Last year, a gamer bought an island for $26,500 (£13,700). The space station is described as a "monumental project" in the "treacherous, but mineral rich" Paradise V Asteroid Belt and comes with mining and hunting taxation rights. With the price tag also comes mall shopping booth and market stall owner deeds, a land management system, a billboard marketing system, and space station naming rights. Neverdie is a popular and well-known in-game character. He and another character, Island Girl, appeared in a 2003 dance music movie Hey DJ!, which starred Jon Jacobs, Charlotte Lewis, and Tina Leiu..." From Gamer buys virtual space station, BBC News. [Related]
October 27, 2005
A Short History of Virtual Hiking
Algorithm as Hiker
A Short History of Virtual Hiking + video by Brett Stalbaum: A virtual hiker is an algorithm that produces computationally derived paths from data in such a way that they can be re-followed through the actual world. The virtual hikers in the C5 Landscape Database, beginning with version 2.0, include various Least Cost Path hikers and a Slope Reduction hiker based on a natural selection algorithm.
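The C5 code itself is not reproduced here, but the core idea of a Least Cost Path hiker can be sketched as a shortest-path search over an elevation grid, with a step cost that penalizes slope. The grid layout, function name and cost formula below are illustrative assumptions, not the C5 Landscape Database implementation:

```python
import heapq

def least_cost_path(elev, start, goal):
    """Dijkstra search over a 2D elevation grid (list of rows of
    elevations, unit cell spacing); step cost grows with slope."""
    rows, cols = len(elev), len(elev[0])
    dist = {start: 0.0}
    prev = {}
    frontier = [(0.0, start)]
    while frontier:
        d, (r, c) = heapq.heappop(frontier)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                slope = abs(elev[nr][nc] - elev[r][c])  # rise over unit run
                step = 1.0 + slope * slope              # penalize steep moves
                nd = d + step
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(frontier, (nd, (nr, nc)))
    # walk back from goal to start to recover the path
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return list(reversed(path))
```

On a flat grid with a single steep peak in the middle, such a hiker routes around the peak rather than over it, which is exactly the behavior that led the field hikers along waterways and saddles.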
The first attempt to follow a virtual hiker through a real landscape was performed by C5 on April 9th 2005 near Dunsmuir California as part of the quest to discover the *Other Path* of the Great Wall of China in California, or as it is now known, simply the Great Wall of California. After a rigorous insertion hike and facing both fading daylight and rapid waters flowing through necessary water crossings, C5 was only able to reach the beginning of the Great Wall's other path. The visual comparison of the China terrain and its California other were satisfyingly documented, even though it was impossible to actually walk in the footsteps of the virtual hiker. (C5 personnel are Joel Slayton, Steve Durie, Geri Wittig, Jack Toolin, Brett Stalbaum, Bruce Gardner, Amul Goswamy and Matt Mays.)
The second attempt to follow a virtual hiker was performed by Paula Poole and Brett Stalbaum using C5-developed software in the Anza Borrego desert of Southern California. On May 28th 2005, we attempted to follow the stepwise 3 degree Least Cost virtual hiker from Agua Caliente Springs to the Inner Pasture. An earlier scouting mission had revealed that part of the LCP path dead-ended in a box canyon, but some probative scouting revealed a saddle over which the canyon could be bypassed. Even though this would cause a small divergence from the course, we proceeded to try the full hike. Unfortunately, the virtual hiker's track also led over a steep talus slope. While the severity of the slope alone did not make the path impossible to traverse, the combination of loose talus and the many agave plants, cholla and barrel cactus in the area presented painful safety challenges. The idea of following the LCP path to Inner Pasture was abandoned after Brett slipped and fell, spearing his arm on an agave.
Realizing that most paths in the area were probably untenable due to the floristic nature of the Anza Borrego desert and its many sharp plants, including the beautiful ocotillo and the jumping and teddy bear cholla, it was decided to follow the nominal foot path to the Inner Pasture known as Moonlight Canyon. While both the LCP hiker and the Slope Reduction virtual hiker utilized parts of Moonlight Canyon, they diverged enough that the claim to have followed the virtual hikers could not be sustained. Interestingly, however, the virtual hikers did traverse parts of Moonlight Canyon.
The desert mountain ranges of the Great Basin provide much less in the way of spiny botanical hazards than does the Sonoran desert. A scouting mission including Brett, Paula and Naomi Spellman was performed on June 18, 2005 to evaluate the terrain. During the Locative Media in the Wild Workshop at the White Mountain Research Station Crooked Creek Facility, on July 22nd, 2005, Brett, Naomi, Kimberlee Chambers and Nico Tripcevich became the first to successfully follow the path of both a Three Degree Least Cost Path hiker and a Slope Reduction hiker. True to form, the LCP path followed a waterway, and the Slope Reduction path discovered a surprisingly easy path, easier than the non-computational route that had originally been scoped out on June 18th. Experiments with virtual hikers are ongoing.
October 25, 2005
Portable Augmented Reality
Computer monitors are by no means an endangered species, but it is becoming increasingly apparent that they will soon be replaced for many uses by floors, walls and table tops. The latest evidence: a Microsoft Research scientist has developed a projector and computer vision system dubbed PlayAnywhere that projects interactive computer-generated images without the need for specially mounted cameras.
Researchers have been reducing the cost and complexity of augmented reality systems in recent years. (See PCs augment reality, TRN June 26/July 3, 2002). The PlayAnywhere system goes further by packaging the components into a single portable unit that doesn't require calibration. The system consists of an NEC tabletop projector, an infrared light source, an infrared camera and a computer. The device projects a 40-inch diagonal image onto the surface it stands on.
Computer vision techniques let users move, rotate and scale projected virtual objects with their hands. The system tracks shadows to determine where fingertips touch the surface; frame-to-frame pixel-level changes determine hand motion. The system also keeps track of sheets of paper in its view and can project images onto them.
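Wilson's actual vision pipeline is more sophisticated than this post can show; as a minimal sketch of the frame-to-frame change detection described above (the threshold value and the plain-list frame layout are assumptions):

```python
def moving_pixels(prev_frame, cur_frame, threshold=30):
    """Flag pixels whose grayscale value changed by more than `threshold`
    between two frames (each a list of rows of 0-255 intensities)."""
    return [
        [abs(c - p) > threshold for p, c in zip(prow, crow)]
        for prow, crow in zip(prev_frame, cur_frame)
    ]

def motion_bbox(mask):
    """Bounding box (rmin, cmin, rmax, cmax) of changed pixels, or None
    when nothing moved between the two frames."""
    hits = [(r, c) for r, row in enumerate(mask)
            for c, flag in enumerate(row) if flag]
    if not hits:
        return None
    rs = [r for r, _ in hits]
    cs = [c for _, c in hits]
    return min(rs), min(cs), max(rs), max(cs)
```

A real system would run this on the infrared camera stream and combine it with the shadow analysis to decide whether a fingertip is hovering or touching.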
The projector system could be used for games, educational software and other interactive graphical computer applications.
(PlayAnywhere: A Compact Interactive Tabletop Projection-Vision System, Symposium on User Interface Software and Technology (UIST 2005), Seattle, October 23-26, 2005) [posted on Technology Research News Roundup]
October 18, 2005
on-the-move interaction with everyday objects
Can we merge the virtual world with the physical world? ReachMedia--by Assaf Feldman, Sajid Sadi, Emmanuel Munguia Tapia--is a system for seamlessly providing just-in-time information about everyday objects. It is built around a wireless wristband with an RFID reader and accelerometers. The wristband detects physical objects that the user is interacting with, and retrieves relevant and personalized information via a smart phone. The user can then have a hands- and eyes-free interaction with the application by using a unique combination of slight gestural input and audio output. See Bracelet Navigates Net, Technology Review News. Download PowerPoint presentation. [via]
October 14, 2005
Exploring User Centred Applications for NFC and RFID
Touch is a research project at the Interaction Design department at the Oslo School of Architecture and Design. Touch takes a user-centred approach to Near Field Communication (NFC). NFC is a technology that enables connections between mobile phones and real-world objects, bridging the gap between the real and the virtual. The project offers the possibility of radically simplifying existing applications and providing a new spectrum of local services through the mobile phone. At AHO we have multiple disciplines, including interaction design, industrial design, urbanism and architecture: a group with significant interest in the possibilities of NFC technology.
October 06, 2005
Virtual Raft Project + Regrets
Two from UC DARNet
The Virtual Raft Project is a multidisciplinary undertaking seeking to create communities of believable autonomous characters that inhabit heterogeneous networks of computational devices. In particular, the project is interested in allowing the characters to break the plane of the traditional desktop screen. To this end, we have designed an interactive installation featuring a novel tangible paradigm for interacting with the characters. This paradigm involves the use of a mobile device, such as a Tablet PC or handheld computer, as a “virtual raft” by which a character may be transported among several virtual worlds. By enabling the character on the raft to react in real time to the raft’s motion in real space, this installation encourages participants to become physically engaged with virtual characters. We believe that this physical engagement can lead to an increase in the believability of the characters.
Share the Burden
Regrets consists of six to ten purpose-built mobile computer stations, publicly located in and around Cambridge, that collect anonymously submitted regrets from the public to comprise a sociological database of contemporary remorse. Instant feedback to the individual user based on other contributors' similar concerns is algorithmically generated and calculated to 'share the burden'. Random selections and groupings of the regrets are made public across the city through existing signage and broadcast facilities. By engaging users in revelations of a problematic but constructive nature, we aim to bring specificities of individual lives, in this case personal regrets, into the realm of public debate, shared learning, and community.
The UC Digital Arts Research Network (UC DARNet) is an interdisciplinary Multicampus Research Group of University of California faculty who utilize digital media for cultural and theoretical research and in their creative production. As an ad-hoc planning group, UC DARNet has been meeting since 1997 to lay the foundation for a UC-wide program to facilitate collaborative research and teaching within a distributed digital arts and humanities community.
Culture is in the midst of an increasingly rapid shift to computer-mediated forms of creative production, distribution and communication. The role of digital media is fundamental to this shift. Digital Artists create a natural bridge across the traditional disciplinary divide between the humanities and the sciences. Interdisciplinary dialogue geared toward creative production and programmatic development will be facilitated by UC DARNet, providing an opportunity for critical engagement and conceptual dialogue between humanists, scientists, and those in the digital arts. A rotating group of UC digital arts and new media faculty act as principal investigators of UC DARNet and as the group's advisory committee.
GOALS AND OBJECTIVES
UC DARNet will, over a five year period, work to:
Serve to bridge counterproductive gaps between the arts, humanities, and sciences;
Enhance students' educational experience by providing access to faculty across the entire UC system;
Engage in experimentation and prototyping of distributed network environments which will be of value to a range of research interests and educational models; and, finally;
Help to establish UC as a leading institution for developing the new modalities of digital culture. UC DARNet will establish a strong UC presence outside of California through conferences, events, workshops, exhibitions, and ongoing online and offline dialogues. It is also actively planning to have research and development activities facilitate distance learning and technology access for underprivileged middle school and high school children in Southern California.
Funded by the University of California's Office of Research, UC DARNet pursues its programs and research activities with matching funding from the participant campuses, partner institutions and organizations, industry sponsors, corporations and foundations.
October 05, 2005
Transactions of CHI:
Can You See Me Now?
"We present a study of a mobile mixed reality game called Can You See Me Now? in which online players are chased through a virtual model of a city by ‘runners’ (i.e., professional performers equipped with GPS and WiFi technologies) who have to run through the actual city streets in order to catch the players. We present an ethnographic study of the game as it toured through two different cities that draws upon video recordings of online players, runners, technical support crew, and also on system logs of text communication. Our study reveals the diverse ways in which online players experienced the uncertainties inherent in GPS and WiFi, including being mostly unaware of them, but sometimes seeing them as problems, or treating them as a designed feature of the game, and even occasionally exploiting them within gameplay. In contrast, the runners and technical crew were fully aware of these uncertainties and continually battled against them through an ongoing and distributed process of orchestration.
As a result, we encourage designers to deal with such uncertainties as a fundamental characteristic of location-based experiences rather than treating them as exceptions or bugs that might be ironed out in the future. We argue that designers should explicitly consider four potential states of being of a mobile participant – connected and tracked, connected but not tracked, tracked but not connected, and neither connected nor tracked. We then introduce five strategies that might be used to deal with uncertainty in these different states for different kinds of participant: remove it, hide it, manage it, reveal it and exploit it. Finally, we present proposals for new orchestration interfaces that reveal the ‘seams’ in the underlying technical infrastructure by visualizing the recent performance of GPS and WiFi and predicting the likely future performance of GPS." From Can You See Me Now? by Steve Benford, Andy Crabtree, Martin Flintham, Adam Drozd, Rob Anastasi and Mark Paxton + Nick Tandavanitj, Matt Adams and Ju Row-Farr
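The paper's 2x2 classification of participant states can be captured directly in code; a minimal sketch (the names are mine, and which strategy suits which state is a design decision for each experience, not something the abstract prescribes):

```python
from enum import Enum

class ParticipantState(Enum):
    """The four states of a mobile participant named in the study,
    keyed by the two independent signals (connected, tracked)."""
    CONNECTED_AND_TRACKED = (True, True)
    CONNECTED_NOT_TRACKED = (True, False)
    TRACKED_NOT_CONNECTED = (False, True)
    NEITHER = (False, False)

def classify(connected: bool, tracked: bool) -> ParticipantState:
    """Derive the participant's state from the two signals; Enum
    lookup by value maps the boolean pair to its named state."""
    return ParticipantState((connected, tracked))

# The five strategies for handling uncertainty listed in the abstract.
STRATEGIES = ("remove", "hide", "manage", "reveal", "exploit")
```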
October 03, 2005
PLAN & Mixed Reality Laboratory
Discussion at Trampoline
The Pervasive and Locative Arts Network and the Mixed Reality Lab, Nottingham University, will be holding a presentation as part of TRAMPOLINE on Thursday 6th October at 8.30pm, Broadway Mezz Bar, Broadway Cinema, Nottingham. Representatives of PLAN and the MRL--Drew Hemment, Ben Russell and Steven Benford--will discuss their recent activities in encouraging discourse between artists, scientists and engineers in the field of pervasive and locative media. Come along to this discussion and find out more about the innovative research, development and artistic production being pursued in this area.
PLAN: A new international and interdisciplinary research network in pervasive media and locative media has been funded as part of the Engineering and Physical Sciences Research Council (EPSRC) Culture & Creativity programme. The network will bring together practicing artists, technology developers and ethnographers with the aim of advancing interdisciplinary understanding and building consortia for future collaborative projects. It will be of relevance to people working in the arts, games, education, tourism, heritage, science and engineering.
The network will stage three major gatherings, each with a distinct form and focus: an initial workshop to launch the network and assess the state of the art; a technology summer camp for artists and technologists, including hands-on prototyping sessions using the facilities at Nottingham's Mixed Reality Laboratory; and a major public conference and participatory exhibition as a central component of the Futuresonic 2006 festival in Manchester. The network will also provide a supporting web site and other resources.
Mixed Reality Lab: The Mixed Reality Laboratory (MRL) is an interdisciplinary research initiative at the University of Nottingham. The MRL brings together leading researchers from the Schools of Computer Science, Engineering and Psychology to research mixed reality - new technologies that merge the physical and digital worlds. The MRL is focused on the development and application of mixed reality to visualisation, learning, knowledge management, control systems, ethnomethodological studies, leisure and co-operative work.
October 01, 2005
Orchestrating a Mixed Reality Performance
"Desert Rain is a game, a performance and an installation created by Blast Theory in collaboration with the Mixed Reality Lab at the University of Nottingham. Commissioned by Contemporary Archives in Nottingham, the ZKM (Centre for Art and Media) in Karlsruhe in association with DA2 in Bristol and the KTH (Royal Technical Institute) in Stockholm, the piece was premiered in 1999. To date it has been presented in Nottingham, Karlsruhe, London, Manchester, Stockholm, Rotterdam, Glasgow, Bristol, Prague and Middlesborough. Desert Rain was shortlisted for an Interactive Arts BAFTA in 2000.
This essay draws on ethnographic surveys using interviews, video recordings and direct observation to explore what techniques are used for orchestrating the participant's experience in Desert Rain." Chapter 12: Orchestrating a Mixed Reality Performance - Desert Rain by Boriana Koleva, Ian Taylor, Steve Benford, Mike Fraser, Chris Greenhalgh, Holger Schnädelbach, School of Computer Science, The University of Nottingham; Dirk vom Lehn, Christian Heath, The Management Centre, Kings College London; and Ju Row Farr, Matt Adams, Blast Theory UK, A Guide to Good Practice in Collaborative Working Methods and New Media Tools Creation.
September 29, 2005
Computer Users Move Themselves with the Mind
"Computer scientists have created a hat that can read your thoughts. It allows you to stroll down a virtual street. All you have to do is think about walking.
Called a brain-computer interface, the device detects activity in certain brain areas linked to movement, and uses the signals to mimic that movement in a virtual world. The technology could one day help paralysed patients to move robotic arms, or help sufferers of motor neuron disease to type out words on a virtual keyboard.
"Just thinking about movement activates the same neurons as actually moving," explains Gert Pfurtscheller of Graz University of Technology in Austria, who has been working on the device for around four years. By picking up on these bursts of nerve activity, the computer can decide whether you are thinking about moving your hands or feet, and react accordingly..." From Computer users move themselves with the mind by Michael Hopkin, nature.com, 27 September 2005.
September 19, 2005
The Hunt for Mr. X:
Bringing a Board Game to the Street
"Scotland Yard" is a popular board game in Europe in which players have to catch Mr. X, who moves hidden through the streets of London. He has to show his location every 4th turn. The detectives know which kind of transportation he uses. With this information they have to figure out his position and surround him. If the detectives catch him, they win; if Mr. X escapes, he wins.
Bringing the game to the street means transforming it into a live-action game: 4 groups of 4 players hunt Mr. X and Mr. Y in the old city of Berne. Hunting happens by MMS: "Every ten minutes, Mr. X sends a picture of his current location. The detectives have to identify the position from the pictures and find Mr. X. They catch Mr. X by taking a picture of him." From The Hunt for Mr. X: Bringing a Board Game to the Street [PPT] by Niklaus Moor (Swisscom Innovations). [via techkwondo] [Related]
The Space Between the Physical and the Virtual
Cognitive Agents in 3D Virtual Worlds
"Abstract: We present an agent-based model of virtual worlds in which the objects in the world have agency, that is, the objects can sense their environment, reason about their goals, and make changes to the environment. The agent-based model has the following reasoning processes: interpretation, hypothesizing, and action activation. This agent model is described and illustrated using a wall agent in a multi-user virtual world. We extend the illustration through a demonstration of a multi-agent world in which many of the objects in the world interactively reason about the use of the world and respond specifically to the people in the world." From Cognitive Agents in 3D Virtual Worlds by ML Maher, JS Gero, G Smith, N Gu, University of Sydney, Australia; International Journal of Design Computing, Vol 6, 2003.
"Abstract: The cyberPRINT is a fully immersive, interactive virtual environment that is being generated in real time based on physiological data readings of a human body. In other words, the cyberPRINT is based on creating interfaces between physical and digital spaces and between biology and information technologies. The cyberPRINT is also an event, wherein a performer is connected to the cyberPRINT generator to create a self-sustaining feedback mechanism. Although the use of the body to electronically drive music and media events is not new, most of these works have paid little or no attention to the potential of interactive 3D virtual environments. Nor have they been so technologically advanced, interdisciplinary intensive (involving Architecture, Choreography, Modern Dance, Music, Bioengineering, Medicine and Computer Science), or architecturally focused as the cyberPRINT.
This project covers a wide and fertile territory that goes from the very technical and design oriented to the very theoretical and interdisciplinary. This paper is intended to (1) expand what has been already published about this project (Bermudez et al 2000a) and (2) establish potential areas for discussion before and after the performance."
1. Introduction: Why a Live Performance?
This paper provides background for the live performance of the cyberPRINT, a real time, physiologic data-driven virtual architecture developed by an interdisciplinary team led by two architects during the past 5 years. The reason for this live performance and demonstration is simple: it is only through performance that we can show the true nature of the cyberPRINT. Such a demonstration will also provide empirical proof of the theoretical claims and technological details already published elsewhere (Bermudez et al 2000a). In addition, this version of the cyberPRINT will add some novelties introduced since then (such as a new virtual world, data-driven music in real time, and a navigational data-glove). Images of a live performance are shown in Figure 1. [via]
September 14, 2005
Anonymous French street artist Space Invader makes mosaics from the eponymous video game, which he puts up in public spaces of his victim city. His Invasions are part of a global project. Each attack is complemented by a map of the city showing all the target sites. The most recent invasion occurred in Bangkok.
"...Currently the computer icon has led us to rediscover the prehistoric form of communication - the hieroglyph - through the use of icons in the extended landscape of the net, communication platforms, games, and the new iconology or visual vocabulary that has risen through the development of software programs. Andy Deck's Glyphiti and Space Invader's Invasions create a context with graffiti that appropriates private space, much as the hieroglyph writer marks the subway and temple walls. Glyphiti, a multi-user collaborative drawing platform calling on the spirit of graffiti, is a site-specific project using code that filters through private sectors of the net, creating a public domain and a collaborative drawing space that recalls the uses of prehistoric hieroglyphs, with pixels replacing the mark, to create a real-time icon.
In Space Invader's invasion the attack is addressed in the public domain, where the Space Invaders' mosaics lurk in the cityscape and the players are anonymous. Space Invader and Deck introduce the cross-pollination of new media as the closest notion to public art production by depicting a position of narrative from the maker to the public space--whether it be the streets or the net--continually questioning the definition of PUBLIC. Space Invader uses the aesthetics of a Byzantine iconographer as an instrument through which a work is executed, a work that goes beyond the individual..." From Copy It, Steal It, Share It by Michele Thursz. [via]
Who's Photographing Whom?
Superstar is a multiplayer photo-based game designed for Ubicomp 2005, Tokyo. The game is free and open to anyone with a phonecam and self-portrait Puri Kura stickers. The game uses Japanese Puri Kura stickers as a starting point for an experiment in social networks, automated phonecam image analysis, and urban visual culture. The goal is to see and be seen, using tiny images woven into the fabric of Tokyo streetlife.
To play, place your own stickers (marked with a star to identify participants) wherever you want and collect the stickers of other players by shooting them with your phonecam. Whenever a player snaps a Superstar sticker, both players earn points. A link is then created between the two players. From this point on, any time either player earns points (by shooting a new sticker or by having their sticker shot) the other one will also earn points (though not as many).
Superstar thus builds a network of connections that forms a social pyramid scheme. Successful players will be the ones that forge connections with other active players. Automated image recognition (e.g. Mobot technology) will decipher who is photographing whom when players email their images to SuperstarHQ via their phonecam. A message is returned via SMS indicating points earned.
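The project description does not give exact point values; a toy model of the scoring rules described above, assuming 10 points per snap and a half-share for already-linked players, might look like this:

```python
class SuperstarGame:
    """Toy model of Superstar scoring: a snap rewards both players,
    links them, and pays a reduced share to their existing links.
    The point values and share fraction are illustrative assumptions."""

    def __init__(self, snap_points=10, share_fraction=0.5):
        self.scores = {}
        self.links = {}   # player -> set of linked players
        self.snap_points = snap_points
        self.share = share_fraction

    def _ensure(self, player):
        self.scores.setdefault(player, 0.0)
        self.links.setdefault(player, set())

    def snap(self, shooter, owner):
        """`shooter` photographs `owner`'s sticker: both earn full
        points, become linked, and players linked to either of them
        before this snap earn a smaller share."""
        for p in (shooter, owner):
            self._ensure(p)
        for p in (shooter, owner):
            for friend in self.links[p] - {shooter, owner}:
                self.scores[friend] += self.snap_points * self.share
        self.scores[shooter] += self.snap_points
        self.scores[owner] += self.snap_points
        self.links[shooter].add(owner)
        self.links[owner].add(shooter)
```

With these rules, if A snaps B and then B snaps C, A passively earns a half-share from the second snap through the A-B link, which is the pyramid dynamic the project describes.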
September 13, 2005
Real World 'Capture the Flag'
"ABSTRACT: We are developing a new multiplayer pervasive game, called The Drop, designed to be compelling to play and yet practical to deploy in real-world settings. In The Drop, two teams use mobile phones to play a version of “capture the flag,” where one team hides a virtual “briefcase” in a public place and the other team attempts to find it within a specified amount of time. If the team that is searching for the briefcase finds it within the game’s time limit, they win; otherwise, the team that hid the briefcase wins. In this article we explain how the game is played, then discuss the technical, social, and business challenges we have faced while creating and implementing it." From The Drop: Pragmatic Problems in the Design of a Compelling, Pervasive Game by I. Smith, S. Consolvo and A. Lamarca (2005). [via]
September 08, 2005
Cinema as Physical Experience
The Ladder--by John Gerrard--consists of two installation rooms, one containing either a large projected landscape or, depending on installation context, a window to the outdoors. The main room is empty but for an overlaid virtual character, visible only through a handheld viewing device. This installation room is plain apart from a window, cut high into the wall and giving a vista, across a space, to the landscape in the next room. The virtual character is most interested in the landscape and continually voices concerns and observations as to its nature from his position at the window, standing on a virtual ladder. This monologue, presented as a 3D soundscape and experienced through wireless headphones, can be heard throughout the installation and unites the diverse elements within the piece.
The work exists at the faultlines opening between traditional notions of presence and place, blending spatial, generative and augmented elements, both virtual and real, into a work of contemporary art. The artist views the possibilities within augmented or mixed reality as a most important breakthrough in media art, allowing cinematic-type installations to move off the 2D plane and inhabit the gallery. Through these strategies cinema can be recontextualised as an interactive and physical experience. [via]
September 06, 2005
Graspable Surrogates in Multiple Point-of-View Interactive Narratives
Today's story creators have the opportunity to bring digital stories back into our physical environment. Using wireless and tag-sensing technologies, interactive narratives can make use of rich, haptic interactions with physical objects, combining them with the flexibility of digital multimedia content processing and delivery.
The Tangible Viewpoints project explores how physical objects and augmented surfaces can be used as tangible embodiments of different character perspectives in a multiple point-of-view interactive narrative. These graspable surrogates provide a direct mode of navigation to the story world, helping to bridge the gap between the realms of bits and atoms within the field of multimedia storytelling.
Using a system for interacting with a character-driven narrative, the different segments of a multiple point-of-view story are organized according to the character viewpoint they represent, as well as their place in the overall narrative. These segments can consist of various types of media (video, audio, images, and text), and can present character development, action, and location with as much complexity as any scene of a film or chapter of a book.
September 05, 2005
How to Host a Pervasive Game: Supporting Face-to-Face Interactions in Live-Action Roleplaying--by Jay Schneider and Gerd Kortuem (Ubicomp 2001)--describes a ubiquitous computing gaming environment that supports live-action roleplaying. The point is to enhance live-action games and to have "a testing ground for our sociability enhancing mobile ad-hoc network applications." The game they present, called Pervasive Clue, is a "live-action roleplaying game based loosely on Hasbro's classic board game Clue augmented with short-range radio frequency (RF) PDA devices."
The goal of Pervasive Clue is to discover who killed the host, Mr. Bauer, where it was done and what the murder weapon was. Solving the murder is done through the discovery of clues; when players feel they can solve the crime, they are allowed to make an accusation. If any of the crime facts (murderer, location or weapon) are incorrect, the player is eliminated. [via pasta and vinegar]
September 02, 2005
Interface #4 / TFT tennis V180
In Interface #4 / tennis V180 two people play tennis using computer screens as rackets. The installation bridges the gap between an actual game of tennis and popular computer games like "Pong." You have to place your screen like a racket in front of the approaching virtual 3D ball. You hold the screen in your hands and move it from left to right or flip the sides. The flip rotation determines the angle and thus the trajectory of the ball after impact on the screen. The position of the steel arms and the "flip" rotation between the screen and the arm influence the perspective of the 3D field which each player sees while playing the game.
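The installation's own physics is not published, but the claim that the flip rotation determines the ball's outgoing trajectory amounts to reflecting the ball's velocity across the screen's surface normal. A 2D sketch, with the angle convention (degrees of rotation from a vertical screen) as an assumption:

```python
import math

def reflect(vel, screen_angle_deg):
    """Reflect a 2D ball velocity off a flat screen rotated
    `screen_angle_deg` from vertical, using v' = v - 2(v.n)n
    where n is the unit normal of the screen surface."""
    a = math.radians(screen_angle_deg)
    nx, ny = math.cos(a), math.sin(a)  # unit normal of the screen
    vx, vy = vel
    dot = vx * nx + vy * ny
    return (vx - 2 * dot * nx, vy - 2 * dot * ny)
```

A ball hitting an unrotated screen head-on bounces straight back; tilting the screen 45 degrees redirects the same ball at a right angle, which is how flipping the racket-screen steers play.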
Sensors placed on the four rotating points of the installation detect movement and determine the status and position of the installation. Quite addictive: there were people queuing to try the game and it was hard to give my place to someone else. By Dutch artist Dirck Eijsbouts. Images. [blogged by Regine on we-make-money-not]
September 01, 2005
A Physical Score Table
In Scrapple, objects placed on the table become sound-producing elements that determine the score's rhythm and pitch. They are interpreted as sound-producing marks in an active score: flexible shape-holding curves allow for the creation of melodies, while a group of small toys and other moving objects yields ever-changing rhythms. Video projections on the table turn the installation into a simple augmented reality, in which the objects are elaborated through colorful and explanatory graphics. [blogged by Regine on we-make-money-not]
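Scrapple's actual mapping is not documented in this note; a minimal sketch of the idea that an object's position determines when it sounds (rhythm) and what it sounds (pitch), with left-to-right as time and near-to-far as pitch, and all ranges as illustrative assumptions:

```python
def table_to_score(objects, table_width=100.0, table_height=50.0,
                   beats=16, low_midi=36, high_midi=84):
    """Map (x, y) object positions on the table to (beat, midi_pitch)
    events: x selects the beat in a looping bar, y selects the pitch.
    The table dimensions and pitch range are not Scrapple's own."""
    events = []
    for x, y in objects:
        beat = int(x / table_width * beats) % beats
        pitch = low_midi + int(y / table_height * (high_midi - low_midi))
        events.append((beat, pitch))
    return sorted(events)
```

A sequencer scanning the table then plays each event when its beat comes around, so nudging a toy sideways shifts its note in time while sliding it away from the player raises its pitch.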
Action Driven Narrative
Jumping Rope is the classic children's game of jump rope transferred into the digital world. Your skill at jumping an invisible rope determines the course of a film narrative. You have to stand between two projection surfaces that each display a different figure holding a jumping rope. You can see on the screen how they are turning the rope but you have to mentally see the rope in order to jump on time. Your talent at jumping is evaluated by a video camera equipped with motion-tracking technology.
If you're good at it, the characters are encouraging and play with you. There are three different scenarios played by three different couples. The installation offers a simulated life situation in which you, as an adult, suddenly realise that your behaviour is pruned through a haunting invisible rope by the people around you (your parents, colleagues, boss, etc.), that you might be "trapped" between two characters or still be dancing to someone else's tune.
In Jumping Rope, the participant becomes a part of the cinematic situation and their jumping enables the scenario to progress and the plot to develop. A work by Portugaly Orna, Talithman Daphna, Younger Sharon (who also made Heartbeats). Video. [blogged by Regine on we-make-money-not]
Virtual Gaming's Elusive Exchange Rates
"MMORPG assets are being traded for real-world money. Are these virtual economies giving gamers their fair share?...
...While it may be hard today to arrive at accurate exchange rate data, many think this dynamic will change over time, mostly because the overall value of the market will one day be too big for real-world governments to ignore.
"Eventually, there's going to be a portfolio of these synthetic currencies," said Castronova. "Cyberspace nations that are issuing these currencies are going to be under legal obligation to report sales and volumes and transactions, because in worlds where those currencies can be freely liquidated into dollars, there are clear tax implications..." From Spot On: Virtual Gaming's Elusive Exchange Rates by Daniel Terdiman, News.com.
August 30, 2005
Virtual Constructs Real Space
"Installation is a system designed by Simon Greenwold at the Aesthetics and Computation Group at the MIT Media Lab, consisting of a viewing window and a stylus with which users can create virtual forms and install them permanently into a real space. By tracking the position and orientation of the stylus and the window itself, we are able to calibrate virtual coordinates with real viewing position. Virtual objects created in the system respond as though they were physically in the space of the room. Once objects are placed in the environment, they will stay there in perpetuity, changing and growing like plants in the air." View demo. [via]
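The calibration described above, registering virtual coordinates with a tracked viewing position, comes down to expressing world points in the window's own frame. A minimal Python sketch, assuming only a tracked position and a yaw angle (the real system tracks full 6-DOF pose):

```python
import math

def view_coords(point, window_pos, yaw):
    """Express a virtual object's world position in the viewing
    window's frame, given the window's tracked position and its yaw
    (rotation about the vertical axis, in radians)."""
    dx = point[0] - window_pos[0]
    dy = point[1] - window_pos[1]
    dz = point[2] - window_pos[2]
    c, s = math.cos(yaw), math.sin(yaw)
    # Apply the inverse (transpose) of a yaw rotation to the offset.
    return (c * dx + s * dz, dy, -s * dx + c * dz)
```

An untransformed window (at the origin, yaw 0) sees world coordinates unchanged; rotating the window 90 degrees swings an object from "in front" to "beside", which is exactly what keeps virtual objects pinned to the room as the window moves.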
August 27, 2005
Manuscape: An Instrument for Non-Linear Storytelling
"I have just got back from Helsinki where I saw Pia Tikka's interactive film Obsession at Kiasma Art Museum. This is a four screen work of immense complexity, working with an automated rules-based system of clip selection, influenced by an audience's physiological reactions to what is seen on screen. Swivelling chairs allow the programme to compile data on viewers' gaze direction, while an interactive handprint on the chair arm picks up subtle alterations in skin conductivity and pulse rate and augments the presentation of image according to arousal states. The subject of the film is sexual obsession and attitudes to rape, conflating female desires and fantasies with a semi-documentary narrative. This is an innovative work, using aspects of audience mobility and biology to alter content." [blogged by Martin Rieser on Mobile Audience]
Obsession (2003-), Finland
Project URL: http://crucible.uiah.fi/crunet.nsf/etofilmpages/obsessio
Actor: Media Lab / University of Art and Design
People: Pia Tikka, Rasmus Vuori
Genre: interactive film/ installation and research project
Obsession is an interactive film/ installation and research project by film director Pia Tikka which uses motion sensors, cinematic metadata and generative algorithms. The participant has control over the drama flow in a manner that respects the affective nature of the cinematic media, where the participant may even lose conscious control over the flow of the events. Both conscious actions and unconscious sensorimotor reactions of the participant's mind/body can affect the narrative. Analyzed images are assigned a position in the narrative story-world map, i.e. the manuscape. The manuscape works as an instrument for non-linear storytelling and allows development of a range of interactive multimedia applications. Interaction and software design by Rasmus Vuori.
From the project web site:
Obsession is an attempt to describe how traumatic acts of violence affect not only the individual, but also violate his or her family and life-environment. The narrative level of Obsession, with its subject matter of "sex and violence" in our representational culture, brings this important subject to the focus of discussion. Inside the four walls of a self-service launderette, young Emmi and a stranger who enters, Henrik, measure embodied distance. From their interaction, a loaded emotional situation emerges, framed by a trauma that Emmi and Emmi's mother Jatta share. The distorted mind of Emmi creates an endlessly wandering path of narrative potentialities and interpretations.
The computational core of Obsession is a narrative engine, based on content metadata. Inspired by the Russian film theorist Sergei Eisenstein (1898-1948), the "montage-machine" of Obsession maps spectators' psycho-physiological states to cinematic elements. Obsession is a kind of enactive cinema: how the narrative unfolds, and how rhythm and soundscape are formed, depend on how the spectator lives through the emotional dynamics between the characters.
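The manuscape idea, metadata-tagged clips selected against a spectator's physiological state, can be sketched in a few lines. The function names and the 0-to-1 arousal scale below are illustrative assumptions, not Tikka and Vuori's implementation:

```python
def blended_arousal(conscious, unconscious, loss_of_control=0.5):
    """Mixes deliberate input with sensor-derived arousal; at
    loss_of_control = 1 the body alone steers the narrative."""
    return (1 - loss_of_control) * conscious + loss_of_control * unconscious

def next_clip(clips, arousal):
    """clips: list of (name, arousal_tag) pairs with tags in [0, 1];
    arousal: current measured arousal in [0, 1].
    Picks the clip whose metadata tag best matches the spectator."""
    return min(clips, key=lambda c: abs(c[1] - arousal))[0]
```

A calm spectator would be served low-arousal material, and rising skin conductivity would pull the montage toward more charged clips, without the spectator ever issuing an explicit command.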
August 26, 2005
Residual Memory Immersed, Materialized in a Real Environment
The Residual Data-Cloud--by Diogo Terroso--is an application that loads images from a networked source and generates a data-driven three-dimensional form. Images are collected via a digital camera, or a mobile phone, by the author and participants during presentation. The resulting shape, which resembles a cloud of dust, is a metaphor of residual memory immersed and somehow materialized in a real environment.
The digital appears here as a parallel dimension in which the user's perception is subjected to layers of abstraction and figuration. The user's behaviour in the real space, captured by a tracking device, affects the data display by revealing different properties of the cloud. Recognizable shapes appear and disappear through interaction. Movie.
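As a rough illustration of how collected images might accrete into such a data-cloud, here is a hypothetical sketch that lifts sampled pixels into 3-D using brightness as depth (the project's actual form-generation algorithm is not documented here):

```python
def image_to_cloud(image, stride=2, scale=1.0):
    """image: 2-D list of brightness values in [0, 255].
    Samples every `stride`-th pixel and lifts it into 3-D, using
    brightness as depth, so a batch of images accretes into a
    dust-like cloud of points."""
    points = []
    for y in range(0, len(image), stride):
        row = image[y]
        for x in range(0, len(row), stride):
            points.append((x * scale, y * scale, row[x] / 255.0))
    return points
```

Feeding many phone-camera snapshots through such a mapping, and re-rendering the accumulated points from the viewer's tracked position, gives the "residual memory" effect described above.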
August 25, 2005
StarChild + FlatWorld
Data sonification has been viewed as a tool with great potential for studying complex sets of scientific data. Sonification can provide the researcher with the ability to perceive variations and trends that are invisible to more established data analysis techniques. However, few have explored the potential artistic applications of sonification. In 1996, Jarrell Pair worked with Alec Robinson to use Matlab and Csound to prototype software to transduce data from various sources such as images, temperature, and light intensity into aesthetically pleasing audio. They used this work to develop the audio effects for StarChild. Using Csound and a custom C program, astronomical data from the Shoemaker Levy-9 comet collision was used as input to create audio for portions of StarChild. Additionally, images of the collisions with Jupiter were transduced into audio effects using Hyperupic, an application running on a NeXT computer. 440k MP3 file (low sample rate) taken from the sample files created from the comet collision.
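The basic transduction step, mapping a data series onto pitch and rendering tones, can be sketched as follows. This is a generic Python illustration, not the Matlab/Csound pipeline described above:

```python
import math

def sonify(series, sample_rate=8000, note_dur=0.25,
           f_lo=220.0, f_hi=880.0):
    """Maps each value of a data series onto a pitch between f_lo
    and f_hi and renders it as a sine tone, so that trends in the
    data become audible as a rising or falling melody."""
    lo, hi = min(series), max(series)
    span = (hi - lo) or 1.0          # avoid dividing by zero
    samples = []
    for v in series:
        freq = f_lo + (v - lo) / span * (f_hi - f_lo)
        n = int(sample_rate * note_dur)
        samples.extend(math.sin(2 * math.pi * freq * i / sample_rate)
                       for i in range(n))
    return samples
```

A rising data series produces a rising sequence of tones; any columns of numbers, whether comet-impact measurements or temperature logs, can be poured through the same mapping.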
From July 1995 through June 1996, Pair was extensively involved in the technical development of StarChild, a multimedia opera. The opera was composed and produced by Audio Research Team director, James Oliverio. Alec Robinson and Pair created sound effects for the opera using data sonification methods they had developed as part of an ongoing team project. Pair was also involved in the installation, testing and evaluation of the eight-channel audio steering system used in the opera.
The StarChild production team included visual futurist Syd Mead (designer for the films Blade Runner, Aliens, Tron, and Star Trek: The Motion Picture), the internationally recognized lighting designer Lloyd Sobel, animator Steve Walker, and scientists and engineers from across the Georgia Tech campus. Students met and worked with the guest artists in workshops, lectures, and in the production of the opera itself. An internet MBone broadcast of StarChild took place on June 5th, 1996. Two live performances followed on June 6th and 7th.
FlatWorld: The Mixed Reality Simulation Space
Since 2001, Pair has overseen the design and development of the FlatWorld project at the University of Southern California's Institute for Creative Technologies (ICT).
FlatWorld is a mixed reality simulation environment merging cinematic stagecraft techniques with immersive media technology. Current virtual environments have severe limitations that have restricted their use. For example, users are often required to wear bulky head mounted displays that restrict a person’s freedom to physically move as they would in the real world. Furthermore, a person cannot touch or feel objects in the virtual world.
This project addresses these issues by developing an approach to virtual reality simulation which allows individuals to walk and run freely among simulated rooms, buildings, and streets.
August 22, 2005
MobZombies is a zombie-fleeing game where a player's movement controls an avatar in the game space. Players run away from virtual zombies by really running. You have to stay alive as a horde of the undead slowly moves towards you. The longer you survive, the more zombies appear and the better they get at following you.
Just because there are no obstacles in the virtual world doesn't mean that there aren't obstacles in the real world. Imagine running away from a zombie and realizing that the only way you can continue to evade it is by somehow negotiating a brick wall in the physical space or running through the football practice field during a scrimmage.
MobZombies is inspired by mobile games such as Botfighers or Mogi, where the player's movement in the physical world correlates to the game space. Because we carry mobile devices with us everywhere, it becomes fun to think about a version of MobZombies that kicks in at random times during the day, forcing you to stop whatever you were doing and try as hard as you can to avoid the undead.
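The core loop, zombies homing in on a position derived from the player's real-world movement, might look like this minimal sketch (hypothetical names and speeds, not the actual game code):

```python
import math

def chase_step(zombies, player, speed=0.5):
    """Moves every zombie `speed` units straight toward the player's
    pedometer-derived position; returns the new zombie positions."""
    out = []
    for zx, zy in zombies:
        dx, dy = player[0] - zx, player[1] - zy
        dist = math.hypot(dx, dy)
        if dist <= speed:            # the zombie has caught up
            out.append(player)
        else:
            out.append((zx + speed * dx / dist, zy + speed * dy / dist))
    return out

def survived(zombies, player, radius=1.0):
    """The run ends when any zombie gets within `radius` of the player."""
    return all(math.hypot(zx - player[0], zy - player[1]) > radius
               for zx, zy in zombies)
```

Since the zombies move slower than a running player but never stop, standing still (in the real world) is exactly what gets you caught.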
August 19, 2005
Real World Activities Affect Your Game Character
G-Link by Ross O'Shea is a student project where your real world activities affect your game character. The hip pouch and badge are full of electronics; as you wear them throughout the day, they sense and record data: how many steps you have taken, how much you speak, and the size, temperature and brightness of your environment.
This device has currently been prototyped using the RPG Morrowind. In usual circumstances, a player might make an avatar that is a fitter, stronger, more attractive version of themselves. The G-Link reverses this: if you sit in and play games all day, your character will be weaker, yet if you go out for a walk then your character will be stronger. Likewise, if you go out in the sun your character will be light-aligned, yet stay indoors and it will go over to the dark side. Recently part of the CoEDD graduate exhibition. [blogged by Chris on pixelsumo]
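The reversal G-Link proposes is easy to model: logged activity in, character attributes out. The thresholds and attribute names below are invented for illustration; Morrowind's actual stat system differs:

```python
def character_stats(steps, words_spoken, light_hours):
    """Derives RPG attributes from a day of logged activity: the
    more you move, talk and see daylight, the stronger, more
    charismatic and more light-aligned the avatar becomes."""
    strength = min(100, steps // 200)        # 20,000 steps hits the cap
    charisma = min(100, words_spoken // 100)
    # Alignment runs from -100 (dark side) to +100 (light-aligned);
    # a day entirely indoors drifts the character toward the dark.
    alignment = max(-100, min(100, int(light_hours * 25) - 50))
    return {"strength": strength, "charisma": charisma,
            "alignment": alignment}
```

A day of 10,000 steps, plenty of conversation and six hours of daylight yields a strong, fully light-aligned character; a day at the desk produces a weak one already sliding toward the dark side.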
August 16, 2005
MAY YOU LIVE IN INTERESTING TIMES
Exploring Digital Technologies in the Context of Public/Shared Spheres
MAY YOU LIVE IN INTERESTING TIMES is Cardiff’s inaugural festival of creative technology - a three-day programme of events being held across the capital. The festival is being developed between Chapter and Bloc (Creative Technology Wales) and includes a two-day conference, new commissions, residencies, screenings, and artists’ projects in public sites across the city.
Artists are increasingly engaged with or inspired by digital technology - exploring consumer and communication technologies such as the worldwide web, mobile networks, file sharing, and computer gaming. Because digital technology is a participatory medium with global reach, artists tend to explore digital technology in the context of public and shared spheres. Often digital art is situated somewhere between public art and street culture where the technology itself is used as a ‘site’ for the production and presentation of art works. Although digital technology is often claimed to go beyond physical limitations, engagement with technology is always embedded in real spaces, whether this is explored from a user or network perspective.
Artists include: Blast Theory, Anri Sala, Grennan & Sperandio, TJ Wilcox, Jen Southern & Jen Hamilton, Scanner, Sarah Morris, Michelle Teran, Eddo Stern, Stefhan Caddick, Nina Pope & Karen Guthrie, Tim Davies, Rosalind Nashashibi, Tenant Spin, Andy Fung, Paddy Jolley, Mircea Cantor, STAR Radio, Valérie Jouve, Chris Evans, Mike Mills, Artstation, and many more.
Conference day 1: 28 October - Locative media and emplacement Speakers include: Prof Michael Corris, Head of Art & Photography, University of Newport; Claire Doherty, Director, Situations; Nina Pope, artist; Heath Bunting, artist; Giles Lane, Proboscis; Steve Benford, Professor in Collaborative Computing, University of Nottingham; Dr Sarah Green, Social Anthropologist, University of Manchester; Jen Southern & Jen Hamilton, artists.
Conference day 2: 29 October - Gaming Speakers include: Ju Row Farr, artist, Blast Theory; Stuart Nolan, researcher; Christopher Sperandio, artist; Eddo Stern, artist; David Surman, Lecturer in Computer Games Design, University of Newport; Alex Mayhew & Emma Westecott, Games Producers & Directors.
Conference Prices: £50 per day / £90 weekend ticket – organisations; £30 / £50 – early bird booking before 7 October. £20 per day / £30 weekend ticket – individuals/concs; £15 / £25 – early bird booking before 7 October. Party, The Point, Cardiff, 29 Oct, 8pm: Special Guests Scanner, Michelle Teran, Proober Glombat, Cymbient, Christopher Rees DJs.
The site will feature live streaming, artists’ projects, downloads, full biographies and images, conference booking and travel details, press section and the chance to receive regular updates on festival activity.
For further information about the programme and conference please contact: Gordon Dalton, Festival Coordinator: email@example.com 44 (0) 29 2031 1059 / 0779 234 1654
The festival is a Cardiff 2005 event and is presented with the support of: Cardiff County Council, Millennium Commission, the Arts Council of Wales, WDA, Cywaith Cymru . Artworks Wales, BBCi, Creative Mwldan, Millennium Stadium, Mute, G39, The Big Sleep, Elfen, Zenith Media, UWN, National Museums and Galleries of Wales, Sequence, Coolpants, Ping Wales, Oriel Mostyn and @Wales. http://www.bloc.org.uk/cgi-bin/showbig.cgi?id=55
August 15, 2005
SUIRIN, by Satou Tokuhisa and Masa Inagake, is an installation that in a sense demonstrates how Augmented Reality technology could help people relieve stress. The device looks relatively simple at a glance – a container with water and floating glass balls called "ukidama." Just play with the water and the balls, and you'll be rewarded with calming ambient light, sound and smoke. You might eventually get immersed in the feedback-control loop that may make you forget about your stressful day.
The key to this installation is the sophisticated sound augmentation mechanism using four microphones, FFT-based sound processing, and four surround speakers. What you hear from the speakers is similar to the sound of suzumushi (bell-ring crickets). This auditory experience is enhanced by the lighting, smoke and tactile sensation.
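The analysis stage, picking out the dominant frequency of a microphone buffer, can be illustrated with a naive DFT. A real installation would use an FFT library; this sketch trades speed for brevity:

```python
import cmath
import math

def dominant_frequency(samples, sample_rate):
    """Naive DFT (a stand-in for the installation's FFT stage):
    returns the strongest frequency component of a mic buffer, which
    the system could then reshape into cricket-like chirps."""
    n = len(samples)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):       # skip DC, stop below Nyquist
        s = sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
        if abs(s) > best_mag:
            best_k, best_mag = k, abs(s)
    return best_k * sample_rate / n
```

Given a 64-sample buffer holding a pure 8 Hz tone at a 64 Hz sample rate, the function reports 8 Hz; splash sounds from the water would show up as their own spectral peaks.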
This is what I would call an Iyashi-kei installation. Iyashi-kei is a Japanese word meaning “a genre related to healing.” This word is relatively new but now very popular and used to describe everything including music, films, things, places, and people. Electroplankton is an Iyashi-kei game, Jennifer Aniston is an Iyashi-kei actress, and so on.
August 08, 2005
Mobile and Pervasive Electronic Gaming
After WiFi Bedouin and other mobile, location-based games, one of Julian Bleecker's current projects, together with Peter Brinson, is called Vis-a-Vis. They've cobbled together a mobile gaming framework composed of light-weight, outdoor-viewable TabletPCs, the available-source Torque game engine, a GPS, and a mil-spec 3D sensor that measures where one points the TabletPC. The plan with Vis-a-Vis Games is to take ordinary electronic gaming off the old-fashioned video game console and desktop PC and move it out into the real world. One of their current demos is an implementation of the Red Light, Green Light game. [blogged by fabian on 7.5th Floor]
August 05, 2005
Day Of The Figurines
A Single Day in the City
Day Of The Figurines--by Blast Theory and partners--is set in a model of a fictional city that is littered, dark and underpinned with steady decay. From the Gasometer to Product Barn, the Canal to the Rat Research Institute, up to 100 players roam the streets, defining themselves through their interactions. A gunship of Arabic troops appears on the High Street. Scandinavian metallists play a gig at the Locarno that goes horribly wrong. How players respond to these events and to each other creates and sustains a community during the course of a single day in the city.
In order to play the game you will need to visit the model at Blast Theory's Studio. From Saturday 30th July - Thursday 18th August we invite you to Blast Theory's Studio in London to register yourself and your phone.
August 03, 2005
Spatial Augmented Reality
The goal of Augmented Reality (AR) is to insert computer-generated virtual objects into the real world; the challenge is in creating an illusion of consistency between the real and the virtual environments. Traditional AR approaches involve head-mounted, eye-worn or hand-held displays. But we can draw parallels between the display techniques used for virtual reality (VR) and AR, and speculate about alternative approaches for AR.
In this talk--Friday, August 5th, 10am-12pm--Ramesh Raskar will discuss new practical alternatives using spatially augmented displays. The spatially augmented reality (SAR) approach exploits video projectors, cameras, radio frequency tags such as RFID, large optical elements, holograms and tracking technologies. The underlying techniques in SAR overcome some of the annoyances of eye-worn AR in authoring, identification and image registration. He will discuss enabling techniques and describe his experience with applications in industrial maintenance, entertainment, art, education and various forms of human-computer interaction.
Ramesh Raskar joined MERL as a Research Scientist in 2000 after his doctoral research at U. of North Carolina at Chapel Hill, where he developed a framework for projector-based displays. His work spans a range of topics in computer vision and graphics including projective geometry, non-photorealistic rendering and intelligent user interfaces. Current projects include composite RFID (RFIG), multi-flash non-photorealistic camera for depth edge detection, locale-aware mobile projectors, high dynamic range video, image fusion for context enhancement and quadric transfer methods for multi-projector curved screen displays.
Dr. Raskar received the TR100 Award, Technology Review's 100 Top Young Innovators Under 35, 2004, Global Indus Technovator Award 2003, instituted at MIT to recognize the top 20 Indian technology innovators on the globe, Mitsubishi Electric Valuable Invention Award 2004 and Mitsubishi Electric Information Technology R&D Award 2003. He is a member of the ACM and IEEE. [via]
July 29, 2005
Rigid and Soft Bodies for Realistic Haptic Interaction
Virtual Canoe is a real-time water simulator with a database of 3D fluid dynamics. The system introduces realistic and interactive water into virtual worlds with haptic characteristics that enhance interaction.
The system combines a pre-computed database and a real-time simulation to establish a realistic waveform. The real-time component is a wave-function simulator that transmits water pressure and velocity. The database provides a force pattern for a point on an object's surface at a constant input velocity. The pressure from the database affects the velocity of the water in the real-time wave simulation, which in turn keys the query to the database. This enables the system to create a realistic waveform. It carefully decomposes the original Navier-Stokes equation into a simple real-time function and a complex pre-computed function, considering the area and the phenomena. Video. Virtual Canoe will be presented at the upcoming Siggraph conference. [blogged by Regine on we-make-money-not]
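The real-time half of such a hybrid can be as simple as an explicit finite-difference wave solver; the expensive Navier-Stokes behaviour is what the precomputed database supplies. A generic 1-D sketch under those assumptions (not the authors' actual decomposition):

```python
def wave_step(u, u_prev, c=1.0, dt=0.1, dx=1.0):
    """One explicit finite-difference step of the 1-D wave equation,
    the kind of cheap per-frame solve a precomputed force database
    can be paired with. The two endpoints are held fixed."""
    r2 = (c * dt / dx) ** 2          # Courant number squared
    nxt = u[:]                        # copy; boundaries stay put
    for i in range(1, len(u) - 1):
        nxt[i] = 2 * u[i] - u_prev[i] + r2 * (u[i+1] - 2*u[i] + u[i-1])
    return nxt
```

A single raised sample in the middle of the water surface spreads symmetrically outward on each step, giving the interactive ripples; the database's force patterns would then perturb this field wherever the paddle meets the water.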
July 26, 2005
You wake up in the morning and sit at your breakfast table. A can of Earl Grey comes magically sliding towards you. But who is suggesting you have a cup of tea? A group of students at Tokyo Institute of Technology created an installation called "Kobito: Virtual Brownies," which allows us to see and feel who it is: kobitos (dwarves).
The system was developed using Computer Vision, Artificial Intelligence and Haptic Feedback technologies. The dwarves appear in a movable display device called Kobito Window (see below) that blends video images and virtual 3D avatars of dwarves. Kobito Window is like a magic lens through which people can view the world differently from the way our naked eyes can see: the device recognizes its orientation using a rotary encoder and displays 3D avatars based on a current view angle. Moreover, you can feel the dwarves through haptic feedback. If you hold the tea can, you can feel the force of the dwarves. If you push the can against the dwarves, they will react to it.
This mpeg movie shows the overview of the system with English captions.
The camera mounted on the ceiling tracks the position and the orientation of the tea can. Based on this tracking technology, the software application determines how 3D avatars act. The system presents the 3D avatars visually (in Kobito Window) and physically (on the table as movements of the tea can). The tea can is controlled by a mechanical component hidden under the table (see the above photo). This component, whose position can be programmatically controlled, integrates a magnet for moving the metal can without physical contact.
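The closed loop described above, camera in, magnet out, is essentially a proportional controller. A hypothetical sketch with invented gains:

```python
def magnet_step(can_pos, target, gain=0.3, max_step=0.5):
    """One frame of proportional control for the under-table magnet:
    the camera reports the can's tracked position and the magnet
    nudges it toward wherever the virtual dwarves are pushing it."""
    ex, ey = target[0] - can_pos[0], target[1] - can_pos[1]
    sx, sy = gain * ex, gain * ey
    # Clamp the step so the can never outruns the magnet's field
    # and decouples from it.
    sx = max(-max_step, min(max_step, sx))
    sy = max(-max_step, min(max_step, sy))
    return (can_pos[0] + sx, can_pos[1] + sy)
```

Iterating this each camera frame walks the can smoothly to the avatars' target position, and the same error term (reversed) could serve as the haptic force the visitor feels when pushing back against the dwarves.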
People who created this installation think systems like this "can be used in the fields of design, amusement, and healthcare."
Related Website: Robot Technology Research Forum at Tokyo Institute of Technology (in Japanese) [blogged by manekineko at we-make-money-not]
June 20, 2005
Large-Scale, Real-Time Motion Capture
Squidball is a large-scale, real-time interactive video game that uses motion capture technology to create a unique and energetic gaming experience. In the world’s largest calibrated motion capture volume, the game debuted on August 12th, 2004, with an audience of 4,000 people, at the Los Angeles Convention Center, as pre-show entertainment for the SIGGRAPH Electronic Theater. SIGGRAPH is the Association of Computing Machinery’s Special Interest Group in Graphics.
Squidball is played with several (12 at SIGGRAPH) large (approx. 4 ft in diameter), helium-filled weather balloons in retroreflective jackets – these are the input devices for the game. By throwing, batting and bouncing the balloons throughout the playing field (which, in this case, was approximately 400 x 200 x 40 ft), the audience plays to eliminate the on-screen targets, displayed on a giant 40-foot video screen that also shows scoring and timers. As a result of this unusual motion capture application, the audience sees the balloons move across the two-dimensional video projection screen as they simultaneously interact with them in the 3-D space. Winning the game inspires and requires collective cooperation among the entire audience.
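Mapping the tracked balloons onto the 2-D game screen and testing them against targets might be sketched like this (an orthographic toy version; the screen resolution, capture volume and hit radius are assumptions):

```python
def to_screen(balloon, volume=(400.0, 200.0, 40.0),
              screen=(1280, 720)):
    """Orthographic map from the mocap volume (x along the hall,
    y across it, z up) to 2-D screen pixels: the audience sees each
    balloon's floor position as a sprite on the big screen."""
    x, y, _ = balloon
    w, d, _ = volume
    return (int(x / w * screen[0]), int(y / d * screen[1]))

def hit(balloon, target, radius=60):
    """A target is eliminated when a balloon's sprite overlaps it."""
    bx, by = to_screen(balloon)
    return (bx - target[0]) ** 2 + (by - target[1]) ** 2 <= radius ** 2
```

This is what lets 4,000 people batting physical balloons see an immediate, legible consequence on the shared screen: the 3-D capture data collapses to sprite positions every frame.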
June 14, 2005
Living Book of the Senses
In Living Book of the Senses--by Diane Gromala--users are able to see their physical surroundings while dynamically engaging with three-dimensional mixed realities which appear on their headsets. Users can interact with the book in dynamic ways. They can ask the book questions (via voice recognition), and can influence the book through their sensory (bio) feedback. Users wear a headset/head-tracker/color camera system that enables them to see physical reality enhanced with a virtual reality overlay. The camera inputs images/patterns and feeds them back into the ARToolkit software which then displays digital information associated with the physical markers onto the headset. The ARToolkit can calculate camera position and orientation relative to physical markers in real time for video-mediated reality.
Each reader can view AR scenes from their own visual perspective. Users can fly into the immersive world and see each other represented as avatars in the same virtual scene. Readers remaining in the AR scene have a bird's-eye view of other readers as miniature avatars in the virtual scene displayed through their headset. User-controlled dialog with the book elicits responses/answers from the book (expressed in digital data: visual, textual, auditory). As the users simultaneously interact with the book in the physical and virtual realms, the book responds to individual and multiple physical states (via biofeedback) to express resulting changes in narrative. The narrative is a cultural history of the senses. [Read Extensive Bodies, interview by Yvonne Volkart with media artist and theorist Jill Scott].
June 10, 2005
Harnessing Cameras "In the Wild"
RealityFlythrough is a telepresence/tele-reality system that works in the dynamic, uncalibrated environments typically associated with ubiquitous computing. By opportunistically harnessing networked mobile video cameras, it allows a user to remotely and immersively explore a physical space. Live 2d video feeds are situated in a 3d representation of the world. Rather than try to achieve photorealism at every point in space, we instead focus on providing the user with a sense of how the video streams relate to one another spatially. By providing cues in the form of dynamic transitions, we can approximate photorealistic telepresence while harnessing cameras “in the wild.” [via]
June 01, 2005
View-Dependent Stereoscopic Projection in Real Environments
No More White Screens
This new technique in stereoscopic projection calibrates each pixel that is projected onto non-planar, textured and colored surfaces for view-dependent stereoscopic visualization. The ad-hoc projection opens up the potential for augmented reality environments in virtually any space, removing the need for a dedicated white projection screen or CAVEs. Imagine projecting a virtual corridor on a brick wall that maintains its illusion by removing any distortions caused by texture or color and by visualizing the depth of the corridor stereoscopically as the viewer changes position. [demo video] [Siggraph 2005] [blogged by cw wang on Future Feeder]
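Per-pixel correction for a coloured, textured surface reduces to inverting a simple image-formation model: observed = ambient + reflectance × projected. A sketch of that compensation step, assuming normalised intensities (real systems also handle gamma, edge blending and the stereo view dependence):

```python
def compensate(desired, ambient, reflectance):
    """Per-pixel radiometric compensation for projecting onto a
    coloured, textured wall: solve
        desired = ambient + reflectance * projected
    for the projector value, clamped to the projector's gamut [0, 1].
    All quantities are normalised intensities for one pixel/channel."""
    if reflectance <= 0:
        return 1.0            # surface absorbs everything: full power
    p = (desired - ambient) / reflectance
    return max(0.0, min(1.0, p))
```

A dark brick (low reflectance) is driven harder than a pale mortar line so both land on the same apparent brightness, which is what makes the wall's texture visually disappear under the projected corridor.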
May 31, 2005
Environment-Sensing Adventures in the City
Neighbourhood Satellites is one of my favourite projects. Ever. I've spammed Myriel Milicevic with messages asking her if she'd send me pictures, when she'll have something online, etc. So far, there's only the PDF of her brilliant thesis project. Proper website is coming soon.
Neighbourhood Satellites are handheld sensing devices that monitor people's local environment. In its current prototype form, the satellite is able to sense air quality, light, and the presence of cell phone signals.
The data collected is presented in three different modes:
- In "status" mode, it displays the current conditions.
- In "game" mode, the satellite leads a parallel existence inside a video screen, navigating amongst the offending pollutants in need of analysis.
- In "map" mode, the system receives data from the other "satellites" carried by people in the area and displays their location and contamination level on a map. This mode could generate some interesting behaviours: a group of players, gathering in larger numbers at polluted street junctions, navigating their satellites through thick air, might just as well be interpreted as a silent protest.
The way the satellite is tilted by the hand controls its navigation on the screen. So the satellite’s position in the hand is just as relevant to the game as the position of the player inside a space.
The satellite also communicates through tangible behaviour. For example, when approaching air pollutants onscreen, the satellite inside the game will suck in air to capture specimens. Simultaneously, the physical satellite emits streams of air that the player can feel in their hand. A gentle vibration accompanies the sensation.
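The tilt-based navigation could be modelled as a simple mapping from hand orientation to on-screen velocity. The dead zone and scaling below are invented for illustration, not Milicevic's implementation:

```python
import math

def tilt_to_velocity(pitch, roll, max_speed=5.0, dead_zone=0.05):
    """Maps the hand-held satellite's tilt (radians) to on-screen
    velocity: level means hover, steeper tilt means faster flight,
    with a small dead zone so a resting hand doesn't drift."""
    def axis(angle):
        if abs(angle) < dead_zone:
            return 0.0
        # Full speed is reached at a 45-degree tilt.
        return max(-1.0, min(1.0, angle / (math.pi / 4))) * max_speed
    return (axis(roll), axis(pitch))
```

Because the mapping depends only on orientation, the satellite's position in the hand matters as much as the player's position in the street, which is exactly the doubling the project plays with.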
Note to Myriel: Hope you're not mad at me because I didn't wait for the website...
May 20, 2005
SPACE AND PERCEPTION
LIVE STREAM OF MIXED REALITY SYMPOSIUM
SPACE AND PERCEPTION will focus on the new phenomenon of Mixed Reality--environments generated by new technologies that offer significant interaction possibilities in both virtual and physical spaces. By identifying what is common in these different views on realities and the perception of space, the symposium discussions aim to contextualise and set up the conceptual background for the emerging field of Mixed Reality. The symposium also intends to activate the collaborative potential of the sciences, art, technology and other creative fields of contemporary society.
May 05, 2005
Full-Body Interfaces for Linking People
CINE--by Miro Kirov, Houston Riley, and James Tunick--is a new type of networked computing environment that exists in an immersive dynamic space instead of in a PC box. Its displays fill entire walls instead of being confined to small, isolating screens. It is our belief that the mouse and 2D desktop are dated and inadequate. CINE's full-body interfaces will allow richer modes of expression and creative 3D data organization will engage users, inspiring a sense of magic while also making information retrieval and collaboration more efficient.
Similar to the futuristic vision of the *Holodeck, CINE is envisioned as an immersive visualization platform and advanced collaboration space equipped with an intuitive multi-user gesture interface. CINE, however, is not merely an illusionary virtual space, it is a mixed-reality space that augments group experiences by linking people in virtual spaces to people in real spaces. [via ColumnNetwork]
May 03, 2005
SPACE AND PERCEPTION
International Symposium on Mixed Reality
SPACE AND PERCEPTION: International Symposium on Mixed Reality, May 20-21, 2005, Riga, Latvia. The symposium will focus on the new phenomenon of Mixed Reality--environments generated by new technologies that offer significant interaction possibilities in both virtual and physical spaces.
As we still have a limited number of spots (3-5) for artists and researchers (working in fields related to Mixed Reality) to participate in the symposium, we ask for an informal application -- please enclose a short statement describing your field of research or the project you would like to present. Deadline: May 9, 2005. Please send your submissions to rixc[at]rixc.lv (applications from the Nordic-Baltic region and West-Russia/Belarus are particularly welcome).
The symposium will focus on the following themes:
MR + digital physics (computable space)
MR + navigation (transformation between real and virtual space maps)
MR + social interaction (public space in physical and virtual environments)
MR + sonification (sound architecture in interactive environments)
MR + game studies (industry vs. gamers)
MR + mobile media (role of users in future applications)
Throughout the existence of humankind, the understanding of space, its frontiers and dimensions has changed a number of times. During the last century, the newest scientific theories and research into the structures of the micro (the atom and the quantum) and the macro (the universe) have sought to expand the boundaries of visible space. As a result of the development of information and communication technologies, a new frontier-free space has opened up -- the virtual space of the electronic media.
At the beginning of the 21st century, as mobile media and wireless networks come to dominance, navigation in physical space has acquired importance after years of surfing the virtual world. Nowadays we unavoidably engage with pervasive media, daily dividing our attention between events in virtual (media) and physical (real) space. The merger of digital and physical space alters not only our perception of space but also our sense of reality...
Artists, scientists, media researchers and technology experts will present their creative discoveries, newest ideas, theories and research in the fields related to space and perception.
By identifying what is common in these different views on realities and the perception of space, the symposium discussions aim to contextualise and set up the conceptual background for the emerging field of Mixed Reality. The symposium also intends to activate the collaborative potential of the sciences, art, technology and other creative fields of contemporary society.
* Preliminary list of participants:
Maja Kuzmanovic / TRG project / FoAM/Belgium // Robert van Kranenburg / Amsterdam / Virtual Platform // Yon Visell / Croatia - (physicist) / Zero-Th Studio // Karmen Franinovic / Croatia - (architect/designer) / Zero-Th Studio // Alkan Chipperfield / Australia (anthropologist) // Geska Helena Andersson - The Trans-Reality Game Lab (studio director) / Interactive Institute / Gotland University // Sha Xin Wei / Canada - (media researcher and techno-scientist) // Juris Zhagars / Latvia - (scientist, radioastronomer) / VIRAC institute - tbc. // Nicholas Gaffney / FoAM/Belgium // Steven Pickles / FoAM/Belgium // Evelina Kusaite / FoAM/Belgium // Tim Boykett (artist) (Austria) / Time's Up // Andreas Mayrhof (artist) (Austria), Time's Up // Brigitta Zics (artist) / Germany // Perttu Hämäläinen (research scientist) / Finland / Telecommunications software and multimedia laboratory / Helsinki University of Technology // Normunds Kozlovs (sociologist) / Latvia and others...
The conceptual background for the symposium is developed in collaboration with the team of the EU Culture 2000-supported project "TRG - Trans Reality Generator": FOAM (Belgium), TIMES UP (Austria), and KIBLA (Slovenia).
The framework for this event also builds on previous RIXC research projects and networked activities exploring media architecture, acoustic space and locative media.
* Organisers and support:
The symposium is organised by the RIXC Center for New Media Culture (Latvia). Supported by the Latvian Ministry of Culture, Nordic Cultural Fund and Latvian Cultural Capital Foundation.
* Contact: rixc[at]rixc.lv or rasa[at]rixc.lv (Rasa Smite)
April 27, 2005
"ABSTRACT: In augmented reality (AR) environments, users experience the physical environment and other users directly along with the mediated virtual objects embedded in the environment. In immersive virtual reality (VR), the user's experience of a visual environment (and sometimes other senses) is completely mediated. The representation of the user's body in virtual environments has granted us new research territory in the dualistic interaction between mind and body: how do the virtual body and the user's mind interact with one another and eventually affect the user's behaviors in the environment? An experiment was conducted to explore the potential effect of the users' and interactants' bodies on the sense of presence in VR and AR environments. Results from the study suggest that the absence of a representation of the user's body in VR environments may lessen the user's sense of spatial presence compared with AR environments." From Comparing Differences in Presence during Social Interaction in Augmented Reality versus Virtual Reality Environments: An Exploratory Study by Tang, A., Biocca, F., and Lim, L. (2004).
April 25, 2005
The Walking Experience
lifeClipper is an open-air art project. It offers an audiovisual walking experience in a virtually enhanced reality. Technically, it is based on portable computer equipment worn by an individual. While the visitor walks around a culturally interesting area or impressive landscape, their position and viewing direction are measured by means of GPS, and the found situation is augmented according to defined presets. Image and sound are displayed on an HMD (Head-Mounted Display). Live-captured image and sound are treated in real time by altering parameters as well as by adding music (composition, spoken text and sampling of documentary material) and photo and video material (documentary and fictively arranged).
Through interventions on habitual ways of listening and seeing, reality becomes questioned and day-to-day situations become an adventure. Users feel as though they are watching a film in which they participate as active observers and in which they get attention from virtual players. The borders between subjective and objective perception become blurred as the user is immersed into space and action but also contemplates artistic compositions and cultural reflections. Read a review.
April 06, 2005
'Ere Be Dragons
'Ere Be Dragons, by Nottingham-based Active Ingredient, is the prototype of a game for PDAs that takes you into a virtual world created by your own heartbeat. As players travel through the landscape the physical and physiological changes in their body help to create a different world. The game responds to the location and physical activity of the players, measured by pedometers and heart-rate monitors.
At the start of the game you set your ideal heart rate range; go above or below this range and the world will begin to fade. Only walking and keeping your heart rate at a steady pace can keep the world alive. Throughout the game there is the sense of being chased: when you do well, the landscape blossoms; if you do badly, a mist appears and the landscape darkens. 'Ere Be Dragons aims to encourage people to exercise by walking and to learn more about their bodies. More in BBC Nottingham. [blogged by Regine on near near future]
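The fading-world mechanic described above is essentially a feedback loop between heart rate and a world "vitality" value, which can be sketched as follows. All names, ranges and rates here are invented for illustration; Active Ingredient's actual code is not public:

```python
# Sketch of the core loop: the world's vitality rises while the
# player's heart rate stays inside the target range and fades when
# it drifts outside. Constants are hypothetical.

def update_vitality(vitality, heart_rate, low=110, high=140,
                    growth=0.05, decay=0.10):
    """Return the new world vitality, clamped to the range 0..1."""
    if low <= heart_rate <= high:
        vitality += growth      # landscape blossoms
    else:
        vitality -= decay       # mist appears, landscape darkens
    return max(0.0, min(1.0, vitality))

# Simulated walk: a steady pace, then a rest that lets the world fade.
vitality = 0.5
for hr in [120, 125, 130, 95, 90, 88]:
    vitality = update_vitality(vitality, hr)
print(round(vitality, 2))   # → 0.35
```

Because decay is faster than growth, stopping costs the player more world than walking earns, which matches the game's stated goal of keeping people moving.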
March 31, 2005
Can You See Me Now in April
Blast Theory performs again their first hybrid reality game
Can You See Me Now?, the hybrid reality game developed by U.K. group Blast Theory and Nottingham University's Mixed Reality Lab, will be performed twice this coming April in Cambridge, U.K. People from all over the world can play online on the virtual streets of Cambridge against Blast Theory's runners on the real city streets. To participate, you need to log in to the game's website (Can You See Me Now) on April 1-3 from 2-5pm (GMT) or on April 6-8 from 4-7pm (GMT). A more extensive explanation of the game can be found at the same website.
Tool for Enhancing Relationships Between Specific Groups and New Urban Spaces
SCOOT is a mixed reality experience designed by Deb Polson and Marcos Caceres to explore the potential of location-based games. Players have to solve clues located in both the real world and the virtual world. They interact with strange objects, receive information via SMS on their phones, and have to text their answers to the game's clues back to SCOOT.
As part of their journey around the site, players are introduced to the area and its surrounds. This orientation is not only spatial, as the game also explores the history of the place and its dynamics. [via near near future]
March 30, 2005
Shaping Networked Realities
"ABSTRACT: The paper introduces a networked multi-user installation and interactive mixed reality environment that combines the fields of interactive art, telecommunication and streaming technologies. Phonic Frequencies is an audiovisual data space whose appearance can be altered via networked communication devices. Visitors are active agents and participate in shaping spaces as they control audiovisual data with their telephones.
Linking the physical space to the digital network space, Phonic Frequencies opens a hybrid reality for distributed sound exchange and visual communication, blurring the boundaries between verbal communication and digital information. The work attempts to break the conventional communication cycle of bidirectional dialogue and to consider the meaning of in- and output from a different perspective." From Phonic Frequencies: Shaping Networked Realities by Tamas Szakal, Christoph Groenegress, Wolfgang Strauss, Predrag Peranovic (c) 2003
March 25, 2005
Pervasive Game Development Today
"...Pervasive gaming was first the vision of Swedish company It's Alive!, meaning location-based games that surround you, 24 hours a day, everywhere. When you walk down the street, you're walking through an adventure world draped on top of the real world, and people you meet may be characters in the same game you're playing. Pervasive games are built upon three core technologies: mobile devices, wireless communication, and sensing technologies that capture players' contexts. It is actually the blend of technologies, combined with the location-based and often public nature of game play, that gives pervasive games their distinctive identity [Bridging the Physical and Digital in Pervasive Gaming]." From Pervasive Game Development Today by Fabien Girardin.
The Matrix Online
Actors Hired to Live In-World
"Since the close of the beta, Warner Bros. Interactive Entertainment announced that it has employed a troupe of 20-odd people whose job it will be to enact narrative scenarios in The Matrix Online live. These people will assume the roles of popular characters, interact with players, and generally move the stories in ways that only live "actors" can. And though it appears that the story hasn't officially commenced, a few players on the Method server were treated to a pretty slick sample of it this afternoon: an extended pep-talk by none other than Morpheus himself." [via Wonderland]
March 24, 2005
Blast Theory presents Can You See Me Now?
Join the Hunt
Played online and in the streets, BAFTA-nominated Blast Theory challenge you to a game of cat and mouse around a virtual map of Cambridge. Log on using the public terminals situated in the Junction's new café bar or play online at canyouseemenow and be dropped in at a random location from where you must avoid capture by the Blast Theory 'runners'.
Eavesdrop on your pursuers' conversations and swap tactics with other players as Blast Theory (real actors positioned on the real streets of Cambridge) hunt down your virtual presence with the aid of handheld computers and GPS technology. FREE to play. April 1st-3rd 2-5pm; April 6th-8th 4-7pm at The Junction, Cambridge.
Artists' Workshop: April 6th 1-4pm; £5/£4 concessions
The player is a key feature in all Blast Theory's work. This afternoon workshop will look at how the audience engages directly with an artwork, in dialogue with other 'players' and the artists.
Using the player as a focus, participants will respond creatively to site/space, game structure and forms of media/communication technologies to inform and stimulate their own media and performance practice. Finally, all participants are welcome behind the scenes at 4pm to look at the hardware, software and performance preparation of 'Can You See Me Now?', Blast Theory's award-winning interactive game, presented by the Junction.
Suitable for artists looking to expand their understanding of new performance technologies; participants should come prepared to move, draw and talk.
To book a place call the Junction box office on 01223 511511 or visit in person at The Junction, 2 Clifton Way, Cambridge, CB1 7GX. [via DAN]
March 10, 2005
Organum: The Game
Anticipated Adaptations and Evolutions
Organum: The Game is being developed by a six-person team led by Greg Niemeyer and Chris Chafe; in the multiplayer game, users conspire with other organs that have been freed from their respective bodies, controlling game play through a collaborative vocal-input interface. The game is designed to encourage highly cooperative performance that combines real-world, embodied performance with virtual play.
"Additionally, players' success in the virtual world of the game will hinge upon their willingness to register as real-life organ donors. We are conducting various organ donor drives in association with the production of all three Organum media projects." [via Jane McGonigal's web site]
March 03, 2005
Build A Representation of Brussels
The CITYSNAPPER_Game--by LAb[au] and iMAGE z[ONE]/Olivier Vanderaa--is a real-time urban interactive photographic game that uses an online 3D electronic space and SMS server technology to relate urban space to the World Wide Web, and users (players) to an urban photographer. Players use SMS to instruct the photographer to go to a particular part of town and bring back a picture of it, orient the shot with additional verbal instructions (a word, sentence or riddle), and record their response. The photographer takes a picture with a cell phone and feeds it into the interface via SMS. The audience can follow the game through the online 3D interface from a home computer or by navigating the 3D model of the city screened in the exhibition space. Players can then retrieve a print of the image "they gave birth to."
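The SMS loop described above -- a player's message carries a destination and a verbal cue for the photographer -- could be handled by a tiny parser like the following. The message format and all names are invented for illustration; LAb[au]'s actual protocol is not published here:

```python
# Hypothetical message format: "GOTO <district> | <cue>", where the cue
# is the word, sentence or riddle that orients the photographer's shot.

def parse_instruction(sms_text):
    """Parse 'GOTO <district> | <cue>' into a structured instruction."""
    head, _, cue = sms_text.partition("|")
    keyword, _, district = head.strip().partition(" ")
    if keyword.upper() != "GOTO" or not district.strip():
        raise ValueError("expected 'GOTO <district> | <cue>'")
    return {"district": district.strip(), "cue": cue.strip()}

print(parse_instruction("GOTO Marolles | a word written on a wall"))
# → {'district': 'Marolles', 'cue': 'a word written on a wall'}
```

A real SMS gateway would add sender identification and reply routing; the point here is only that each game move is a small, parseable text message.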
February 28, 2005
RESONANCES AND EVERYDAY LIFE:
Ubiquitous Computing and the City
"Abstract: Ubiquitous computing seeks to embed computers into our everyday lives in such ways as to render them invisible and allow them to be taken for granted, and social and cultural theories of everyday life have always been interested in rendering the invisible visible and exposing the mundane. Despite these related concerns, social and cultural studies remain in the background of discussions of ubiquitous technology design. This essay seeks to introduce researchers in both fields to each other, and to begin to explore the ways in which collaboration might proceed. By exploring mobile and ubiquitous technologies currently being used to augment our experiences of the city, this paper investigates notions of sociality, spatialisation and temporalisation as central to our experiences of everyday life, and therefore of interest to the design of ubiquitous computing." From Resonances and Everyday Life: Ubiquitous Computing and the City (DRAFT) by Anne Galloway. [via]
February 22, 2005
Reflecting the Virtual Back Into the Physical
UTILITY WORKS seeks to acknowledge the importance of the mundane physical spaces where we play out our lives and provide new ways for the population to inscribe themselves into these informal public spaces. In an effort to draw new connections across our cities, UTILITY WORKS perverts existing municipal street furniture which will become points of mediation between disparate physical urban environments and the people that inhabit them. With these new urban experiences UTILITY WORKS seeks to provoke a renewed awareness of self and environment through a reexamination of familiar civic objects that we interact with every day. Just as a mirror reflects our bodies back to us, this project becomes a way of reflecting the virtual back into the physical urbanscape. Existing parking meters, post boxes, and garbage bins - the furniture of the mundane - will be activated to shape and articulate the informal public spaces of the city. Parking Meter | Post Box | Garbage Bin [via]
February 16, 2005
When Rooms Respond, 2
3 minutes2, by French art collective Electronic Shadow, recreates an extremely reduced living unit that can extend beyond its physical borders via the image. The space permanently reconfigures itself according to its inhabitant's activities and also defines itself in time. The scenario compresses into a few minutes most of the activities and functions taking place in the habitat, corresponding to the inhabitant's daily life: eating, sleeping, working, etc.
The inhabitant is represented in the image as a silhouette, and the habitat builds itself around her or him like a cocoon. 3 minutes2 tries to draw the shape of a daily life modified by technologies and the presence of the virtual. The hybridization of real and virtual is fictively achieved and becomes the ground for proposing a habitat that anticipates the technological and social modifications making it possible.
There are no screens and no visible interfaces: the two characters touch the walls and make movements, and the habitation responds to them. The technology has become totally invisible, and its effect then becomes magic. [blogged by Regine on near near future]
February 15, 2005
Another (Virtual) Brick in the Wall
"...There are two types of audio tours: Linear, where the story being told has a strict order, and Random Access, which uses RFID to tell what object a user is near. The AR tour, though, blends the best of both: a linear story with a random-access system that lets users dig deeply into whatever fascinates them, giving them access to what feels like seemingly unlimited information...
..."My idea is let's ignore the goal of perfect registration between the virtual and physical world, which seems to be the goal of many AR researchers, and instead figure out what is needed to support the creation of usable, rather than perfectly registered, systems," says MacIntyre. MacIntyre sees AR as a new artistic medium that even the non-technically inclined can work with. To further that idea, his team developed DART -- The Designers Augmented Reality Toolkit -- a set of software tools for Macromedia Director that support the design and implementation of augmented reality experiences and applications..." From Augmented Reality: Another (Virtual) Brick in the Wall by Michelle Delio, Technology Review, February 15, 2005
January 31, 2005
Classic Tron, Distributed
GPS::Tron, by Thomas Winkler, combines GPS, Bluetooth and GSM or GPRS with the classic Tron game concept--mobile phone plus computer game.
The player's movements in real space, tracked by GPS and transmitted to the phone's display, influence his or her position in the game. Each player is represented by a line that gets longer and longer, but a player's own line is never allowed to cross itself or the opponent's line, which makes the game harder as time passes. The goal is to drive your opponent into a corner so that he can no longer extend his line without breaking the rules and losing. This is a game for two players who can be geographically distant from one another. [posted by Regine on near near future]
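The Tron rule described above can be sketched by quantizing GPS fixes to grid cells: a player loses the moment a new fix lands in a cell already claimed by either trail. All names and the cell size are invented for illustration; this is not Winkler's implementation:

```python
def to_cell(lat, lon, cell_size=0.0001):
    """Quantize a GPS fix to a grid cell roughly 11 m across."""
    return (round(lat / cell_size), round(lon / cell_size))

def advance(trail, occupied, lat, lon):
    """Extend a player's trail by one fix; return False on a crash."""
    cell = to_cell(lat, lon)
    if cell in occupied:
        return False            # crossed a line: rule broken, game lost
    trail.append(cell)
    occupied.add(cell)
    return True

occupied = set()
p1_trail, p2_trail = [], []
advance(p1_trail, occupied, 47.0000, 15.0000)   # player 1's first fix
advance(p2_trail, occupied, 47.0005, 15.0005)   # player 2 starts elsewhere
print(advance(p2_trail, occupied, 47.0000, 15.0000))  # → False: hits p1's line
```

A real implementation would tolerate consecutive fixes in the same cell, so a player standing still does not crash into their own trail; the sketch omits that for brevity.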
January 27, 2005
Motivations for AR Gaming
T. Nilsen, S. Linton, J. Looser. Motivations for AR Gaming (.pdf). In Proceedings Fuse 04, New Zealand Game Developers Conference, Dunedin, New Zealand, 26-29 June 2004, pp 86-93.
In Augmented Reality (AR), interfaces consist of a blend of both real and virtual content. In this paper we examine existing gaming styles played in the real world or on computers. We discuss the strengths and weaknesses of these mediums within an informal model of gaming experience split into four aspects: physical, mental, social and emotional. We find that their strengths are mostly complementary, and argue that games built in AR can blend them to enhance existing game styles and open up new ones. To illustrate these ideas, we present our work on AR Worms, a re-implementation of the classic computer game Worms using Augmented Reality. We discuss how AR has enabled us to start exploring interfaces for gaming, and present informal observations of players at several demonstrations. Finally, we present some ideas for AR games in the area of strategy and role playing games. [via pasta and vinegar]
January 26, 2005
The Dream Project
Reinforcing the Disparity Between Virtuality and Physicality
In the teleconference performance The Dream Project--by Keith Roberson and Company in Space--performers wearing datasuits control virtual avatars in a shared virtual reality environment. Bridging the gap between multiple locations, these avatars interact with each other. Their interactions explore possibilities beyond the physical body. Choreography of these virtual bodies, based on Quantum and Super-string theory, reinforces the disparity between virtuality and physicality.
January 25, 2005
Mobile Multimedia as Mixed Reality
GoingPublik is a sound art work for distributive ensemble and mobile score synthesis. The core idea behind the work is a strategy of mobility, achieved by employing a wearable computer system running a software-based electronic scoring system. GoingPublik will be performed at Interactive Futures 05 on February 5 from 9:30-10:30 pm. Art Clay will present a lecture, "GoingPublik: Mobile Multimedia as Mixed Reality," from 2:30-3:30 pm.
"The core idea in the project is a strategy of mobility and this is accomplished employing a wearable computer system running the software based electronic scoring system as its central element. The program itself basically allows for what might be termed 'composed improvisation' which permits improvisational elements within a compositional structure. This is accomplished by electronically monitoring the performer's physical behaviour during performance. The program then responds by making suggestions to and even demands on the performer to various degrees and at various times.
Since each of the performers is equipped with the same electronic scoring system, and because the system revolves around universally shared inputs such as geographical positions obtained via satellites and sensors using the earth's magnetic field, all have a common denominator and are thereby virtually linked. Despite the physical distribution of the performers in space, it is possible to have a commonly shared compositional palette and, at moments of close proximity between performers, to obtain instantaneously synchronized sonic elements -- both aspects needed for creating sonic structure within the work." Read paper >>
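The proximity-triggered synchronization the paper describes can be sketched as a distance test over the performers' shared GPS inputs. The names, the flat-earth distance approximation, and the threshold are all assumptions for illustration, not Clay's scoring system:

```python
import math

def distance_m(p, q):
    """Rough planar distance in metres between two (lat, lon) fixes."""
    dlat = (p[0] - q[0]) * 111_320                       # metres per degree latitude
    dlon = (p[1] - q[1]) * 111_320 * math.cos(math.radians(p[0]))
    return math.hypot(dlat, dlon)

def sync_pairs(performers, threshold_m=25.0):
    """Return pairs of performer names close enough to synchronize."""
    names = sorted(performers)
    return [(a, b) for i, a in enumerate(names) for b in names[i + 1:]
            if distance_m(performers[a], performers[b]) <= threshold_m]

positions = {"alpha": (47.3700, 8.5400),
             "beta":  (47.3701, 8.5401),   # roughly 13 m from alpha
             "gamma": (47.3800, 8.5500)}   # over a kilometre away
print(sync_pairs(positions))               # → [('alpha', 'beta')]
```

Each performer evaluates the same test on the same shared inputs, so the flagged moments of synchronized material agree across the ensemble without any central coordinator.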
Composition Camouflaged: On the Relationship between Interpretation and Improvisation; Interview with Roland Dahinden, Günter Heinz & Thierry Madiot on the World Premiere of GoingPublik by Franziska Martinsen
January 12, 2005
Embodied interaction (and seduction) in games
Technologies of Cooperation
In a Guardian article, Andrew Losowsky looked at the technology behind the success of Sony PlayStation 2's EyeToy. Recently he followed this up on his weblog, quoting Gonzalo Frasca, a computer games researcher at the IT University of Copenhagen, who emphasizes the impact of embodied interaction in games.
"With these games, players can use their bodies to communicate, to express themselves," says Gonzalo Frasca, "In addition to your play style, you also have your body language style. For example, some EyeToy players will try to make minimal movements, while others enjoy doing more grotesque ones. Lots of people also enjoy doing movements that are not functional to the game itself, like spinning or jumping, just because it's a cool thing to do and they are aware that they have an audience."
"There are two main reasons why it works so well," says Gonzalo Frasca. "The first and obvious reason is that it is extremely easy to learn and it involves a very natural interface: body movements. But the most important reason for its popularity is that it is also a fun game to watch. People make a lot of goofy movements while playing it, so it is very enjoyable for non-participants."
The transformed game structure has some interesting aspects that the original game does not have. In the transformed game, the players have included an element of conscious choice in the rules by including an action where they have to choose whether it is profitable to do calculations to make a piece. Making the right choice depends on cooperation and interaction between group members in the physical space...
The design of both hardware and software for a camera-based interface is a new challenge. Suddenly there is an importance not just on who is playing but exactly where they are doing it, and what the conditions are like in players' front rooms.
ARQuake is an Augmented Reality (AR) version of the popular Quake game. Augmented reality is the overlaying of computer-generated information onto the real world. We use a head-mounted display, mobile computer, head tracker, and GPS system to provide inputs to control the game. Using ARQuake, you can walk around in the real world and play Quake against virtual monsters... Our task at the moment is to take all the monsters, the guns, etc. out of the Quake game and make them roam around a real environment. We also require the player of the game to move around the real world and have all the monsters appear as though they were standing next to and behind real buildings. Read ARQuake: An Outdoor/Indoor Augmented Reality First Person Application [PDF]; view videos.
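A minimal sketch of the GPS-to-game mapping the paragraph describes: project latitude/longitude into metres relative to a surveyed origin, then scale into game units. The origin, scale and function names below are assumptions for illustration; the real system is detailed in the linked paper:

```python
import math

ORIGIN = (-34.9285, 138.6007)        # hypothetical surveyed reference (lat, lon)
UNITS_PER_METRE = 32.0               # hypothetical game-unit scale

def gps_to_game(lat, lon, origin=ORIGIN, scale=UNITS_PER_METRE):
    """Map a GPS fix to (x, y) game units east/north of the origin."""
    north_m = (lat - origin[0]) * 111_320
    east_m = (lon - origin[1]) * 111_320 * math.cos(math.radians(origin[0]))
    return (east_m * scale, north_m * scale)

x, y = gps_to_game(-34.9284, 138.6007)   # about 11 m north of the origin
print(round(y))                           # → 356
```

With the world model surveyed in the same local frame, placing a monster "behind a real building" reduces to giving it game coordinates computed from that building's GPS position.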
December 29, 2004
Free Network Visible Network
Watching Information Flow
Spanish researchers Clara Boj and Diego Díaz are working on Free Network Visible Network, an augmented reality project which aims to make visible the information exchanged between the computers of a wireless network. First, marks on the facades of buildings will indicate the presence of a node, and thus the possibility of connecting to the net, while letting passers-by see a real-time 3D representation of the information being interchanged with that node.
Free game tools will then be developed that use the location of players in physical space as the main element of the action. For example, players would be able to modify the information (messages, images or videos) on the marks placed in physical space and record messages for other users/players. Or they could replace advertising images or urban signage by placing small marks and creating game strategies. In addition, players looking at a mark will know the position of other players in real time. On the basis of these simple applications, users will be able to develop their own games.
December 27, 2004
Emerging Infrastructures of All (Inter)net Research
Dr. Reinhold Grether's network research | netzwissenschaft site maps the "emerging infrastructures of all (inter)net research endeavours. net.science as an anthropology of connectivity is trying to overcome the constraints of specialist method transfers on net matters. the protuberance of technical networks necessitates a professionalization of human net knowledge. neither the isolation of concepts as in basic research nor the encapsulation of processes as in applied sciences will ever be able to adequately describe the complex autopoiesis of networks. net.science is undoubtedly developing into a scienza nuova of its own right."
December 17, 2004
Video game activism
(t)error, by Robert Praxmarer, is an interactive mixed reality arcade game in which the player, physically represented in the game by his or her silhouette, is the interface that controls the game. The player has to jump and duck to avoid attacking fighter planes, tanks and submarines, and also has to grab dollar signs, oil pumps and the like to get a good score.
At the beginning of the game the player chooses a character -- for example George W. Bush, the nations that joined the Iraq war, the nations of the "axis of evil," and other people and countries one might associate with terror.
Via Neural. (Posted by Régine Debatty)
December 14, 2004
Where on-line meets on the streets:
experiences with mobile mixed reality games
"We describe two games in which online participants collaborated with mobile participants on the city streets. In the first, the players were online and professional performers were on the streets. The second reversed this relationship. Analysis of these experiences yields new insights into the nature of context. We show how context is more socially than technically constructed. We show how players exploited (and resolved conflicts between) multiple indications of context including GPS, GPS error, audio talk, ambient audio, timing, local knowledge and trust. We recommend not overly relying on GPS, extensively using audio, and extending interfaces to represent GPS error." From M. Flintham, R. Anastasi, S. Benford, T. Hemmings, A. Crabtree, C. Greenhalgh, T. Rodden, N. Tandavanitj, M. Adams, and J. Row-Farr (2003). Where on-line meets on the streets: experiences with mobile mixed reality games. In Proceedings of the conference on Human factors in computing systems, pages 569-576, Florida, USA. [via pasta and vinegar]
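The last recommendation -- representing GPS error in the interface rather than hiding it -- can be sketched with a simple error model: each position becomes a disc whose radius grows with the receiver's dilution of precision, and a catch only counts when the discs guarantee it. The model (HDOP times a nominal base error) and all names are assumptions, not the paper's method:

```python
import math

def error_radius_m(hdop, base_error_m=5.0):
    """Estimated horizontal error from the receiver's dilution of precision."""
    return hdop * base_error_m

def certainly_caught(p, q, hdop_p, hdop_q, catch_range_m=20.0):
    """True only if p and q are within catch range even in the worst case."""
    worst = math.dist(p, q) + error_radius_m(hdop_p) + error_radius_m(hdop_q)
    return worst <= catch_range_m

# Two players 2 m apart: a clean fix makes the catch certain, while a
# poor fix leaves it ambiguous -- and the interface should say so.
print(certainly_caught((0, 0), (2, 0), 0.8, 0.8))   # → True
print(certainly_caught((0, 0), (2, 0), 4.0, 4.0))   # → False
```

Drawing the error disc on screen gives players the same context cue the paper found them reconstructing socially from talk, timing and local knowledge.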
December 09, 2004
Summoned Voices--by Iain Mott and Marc Raszewski--acts as a living memory of people and place. It consists of a series of door installations each with an intercom, sound system and a computer that is networked to a central file and database server. The design metaphor of the door presents a familiar scenario, that of announcing oneself at a doorway and waiting for a response from persons unknown. Signage instructs the public to speak, make sounds or sing into the intercom. Their voice is stored and interpreted, and results in local playback composed of the individual's voice with those that have gone before. Summoned Voices acts as an interpreter of sound, a message board and an imprint of a community - a place for expression, reflection and surprise.
Summoned Voices is one of the 22 projects catalogued by Mirjam Struppek for her thesis Interactionfield - Public Space in the Digital Age (2002): "Public space is a field which is created and becomes alive through various forms of interaction.
Interactive art, with its proximity to life, its communicative concerns and its reflection on the power of modern technologies, is thus well suited to being linked with urban public space. This information platform presents in detail a selection of interactive media projects that consciously went into urban public space. The analysis, categorization and questionnaire with the producers of these 22 examples show how new media can be used alternatively, in order to contribute to the rediscovery and reactivation of urban space and to lead to new qualities of interaction. Further projects are collected in an open, expandable list. Over time an archive can develop, with examples that show new ways of dealing with public space. The links section presents various conferences, research projects, articles and specialized artists related to this topic."
Are cell phones new media?
re-narrating cities via nomadic technologies
"Multiuser environments in cyberspace have frequently been regarded as utopian spaces in which users could project their imagination. When communities are shaped in a hybrid space, mobile phones become new media tools for creating novel and unpredictable imaginary spaces, re-narrating cities. Fixed Internet users do not have the ability to move through physical space. But the emergence of nomadic interfaces represents a chance for such imaginary spaces to be enacted and constructed in physical space.
Nomadic technologies have a twofold role in the construction of playful/narrative spaces. First, they allow virtual spaces to be mobile, bringing them into the physical world. Second, when used to play games, they free the game from the game board or the computer screen, making it possible to use the city space as the game domain." from Are cell phones new media? by Adriana de Souza e Silva [Related]
December 03, 2004
"GPS, wireless location services, surveillance technologies, and other augmented space technologies all define data space – if not in practice then at least in their imagination – as a continuous field completely extending over and filling in all of physical space. Every point in space has a GPS coordinate which can be obtained using a GPS receiver. Similarly, in the cellspace paradigm every point in physical space can be said to contain some information that can be retrieved using a PDA or a similar device. With surveillance, while in practice video cameras, satellites, Echelon (the set of monitoring stations operated by the U.S. and used to monitor all kinds of electronic communications globally), and other technologies can so far reach only some regions and layers of data but not others, the ultimate goal of the modern surveillance paradigm is to be able to observe every point at every time. To use the terms of Borges's famous story, all these technologies want to make the map equal to the territory." From "The Poetics of Augmented Space: Learning from Prada" by Lev Manovich.
November 20, 2004
With Wings and Feathered Helmets
Cockfight Arena (Eddo Stern with Mark Allen, Jessica Hutchins, Karen Lofgren): a one-night parade of sweat and adrenaline that hoped to reclaim performance art in the age of video games, pitting viewer against viewer in a brutal virtual cockfighting theater. Audience volunteers suited up in custom-made wireless game controllers with full-sized wings and feathered helmets. Combatants stepped into an arena to control their life-size game avatars through vigorous flapping and pecking, competing for blood and birdfeed while rapaciously inflicting onscreen bodily harm in a custom-made "joust-like" fighting game. [via]
Eddo Stern was born in Tel Aviv, Israel. His working interests are in structures of narrative and documentary, fantasies of history and technology, cross-cultural representation, and the phenomenological and cultural expanse of computer games. He works discriminately with a wide variety of media—computer games, electronic devices, software, video, sculpture and performance. His work has been shown internationally at new media and film festivals, museums, galleries, and game conventions. In 2000 he started «C-level,» a cooperative artist-run new media lab and art space in LA's Chinatown; currently lives near Los Angeles.
Source: http://www.medienkunstnetz.de/artist/stern/biography/, Florian Zeyfang, Interview with Eddo Stern, Tages Anzeiger, February 2002 and web at http://www.c-level.cc
November 19, 2004
Seamlessly Connecting Real + Virtual Worlds
CNN reports on a Human Pacman game, developed by Singapore-based researchers, that can be played out in the streets or any real environment. The system was designed by the Mixed Reality Lab at the National University of Singapore. It includes Bluetooth, GPS, head-mounted displays and inertia sensors. Pacman can pick up power pills (in the form of Bluetooth-enabled boxes placed in the game environment) in his attempt to avoid pursuers. Read more at the New Scientist. (via)
Human Pacman has several novel aspects. Firstly, the players immerse themselves in role-playing the characters, physically moving around in a wide-area setting and performing tasks to reach their goals. Utilizing the computing power of wearable computers and the underlying network support, Human Pacman players experience seamless connections between real and virtual worlds, through both an immersive first-person augmented reality view and a full virtual reality view of Pacworld, the fantasy version of the game.
Secondly, Human Pacman also explores novel tangible aspects of human physical movement, senses and perception, both in the player's environment and in the interaction with the digital world. For example, to devour the virtual "enemy", the player has to tap the real, physical enemy's shoulder, an instinctive action for "catching" an "enemy."
Thirdly, users enjoy unrestricted movement outdoors and indoors while maintaining social contact with each other. Players interact either face-to-face with other players when physically in proximity, or indirectly via the wireless local area network (LAN).
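The published descriptions don't include the lab's actual matching code, but the capture rule (physical proximity plus the shoulder tap) can be sketched roughly. The function names, the two-metre radius, and the tap flag below are illustrative assumptions:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    r = 6371000  # Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def can_devour(ghost_pos, pacman_pos, tap_registered, radius_m=2.0):
    """A Ghost devours Pacman only when both players are physically
    close (GPS proximity) AND the shoulder-tap sensor has fired."""
    return haversine_m(*ghost_pos, *pacman_pos) <= radius_m and tap_registered
```

The interesting design point is the conjunction: GPS alone is too coarse and too easy to spoof as a capture trigger, so the tangible tap supplies the actual "catch" gesture while the GPS check merely validates it.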
November 17, 2004
TEKKEN TORTURE TOURNAMENT
During the Tekken Torture Tournament performance (by Eddo Stern, Mark Allen), 32 willing participants received bracing but non-lethal electrical shocks corresponding to the injuries sustained by their onscreen avatars. Players wore shocking arm straps wired through a hardware/software hack of the world’s most popular PlayStation fighting game, TEKKEN 3. LINKS: Official c-level Tekken Torture Tournament Page; TTT release form
Eddo Stern was born in Tel Aviv, Israel. His working interests are in structures of narrative and documentary, fantasies of history and technology, cross-cultural representation, and the phenomenological and cultural expanse of computer games. He works discriminately with a wide variety of media—computer games, electronic devices, software, video, sculpture and performance. His work has been shown internationally at new media and film festivals, museums, galleries, and game conventions. In 2000 he started «C-level,» a cooperative artist-run new media lab and art space in LA's Chinatown; currently lives near Los Angeles.
Source: http://www.medienkunstnetz.de/artist/stern/biography/, Florian Zeyfang, Interview with Eddo Stern, Tages Anzeiger, February 2002 and web at http://www.c-level.cc and http://waco.c-level.cc (via)
November 16, 2004
Blast Theory in New York
Blazing a Trail in New York City
Downtown Digital Futures: New Technologies, Art, and the Urban Environment presents Blast Theory--Monday, November 22; 7 PM. Schimmel Center for the Arts at Pace University, 3 Spruce Street, New York.
Blast Theory is one of the most adventurous artists' groups working at the intersection of performance, media, and computer gaming. Led by London-based artists Matt Adams, Ju Row Farr and Nick Tandavanitj, the six-member team uses video, computers, performance, installation, mobile and online technologies to confront a media-saturated world in which popular culture rules. Blast Theory will discuss their recent work, an interactive performance/gaming event that fuses the worlds of on-line and off-line players.
In summer and fall 2005, Blast Theory will take part in a two-month residency in New York City, in which they will create a new interactive multi-player game based in Lower Manhattan. The residency will culminate in a site-specific performance in which local participants team up with on-line players in an evolving game that blurs the lines between physical and virtual.
Downtown Digital Futures is curated by Wayne Ashley for the Lower Manhattan Cultural Council. See the web site for information about 2005 events.
November 09, 2004
Location-based mobile phone games
IN-duce published a list of mobile phone games using GPS or cell tower signals; there's also a list of other location-based (or augmented reality) mobile games.
November 08, 2004
Alternative reality game for mobile phones
Vienen por ellas
"Vienen por ellas (They come for them)" is a Chilean ARG game (alternative reality game), created for Telefonica, that mixes the real world with the game world. Aliens are planning to conquer the Earth. They will capture all the women to fill the planet with "hybrid creatures." Users become part of an anti-alien organisation called Plan-EVA which tries to save the human race by solving quiz, answering questions, finding the clues, etc.
Users play via SMS, voice messages, Web sites, WAP, moblogs, MMS, ringtones, etc. For example, by calling 321 (the "intercomunicador 321"), players can listen to their current mission, get clues to solve the riddles, etc. Forums were also created for players to share and comment on their experiences. So far, the game is a success, with more than 300,000 users (mainly between 12 and 30 years old) registered. (via)
November 07, 2004
Persistent virtual worlds at USC's Integrated Media Systems Center: using mobile devices, students walk through the USC campus leaving behind virtual fragments, represented by small 3D models, as they go. These objects exist in a virtual world embedded in a 3D model of the USC campus. Mobile users with PDAs are able to create new objects and peek inside the space they are helping to develop. As each object is dropped, it retains a set of unique behaviours and will interact with other elements in the virtual world. The result will be an emerging, complex series of ecosystems that give life to the virtual space and propel users and viewers alike to engage with the world. more
The Game and Theater departments in Utrecht (School for the Arts) are collaborating on the project Flock, a virtual choreographic work. The project is inspired by computer games: it's a virtual dance performance made entirely on the computer. Flock is an interactive performance for a maximum of 10 players, a multi-player system in which all players are connected to one computer (the server).
Flock is inspired by multi-agent (MA) systems: coordinating intelligent behaviour among a collection of autonomous intelligent "agents", and how they can coordinate their knowledge, goals, skills and plans to jointly take actions or solve problems. Agents in a multi-agent system must share knowledge about problems and solutions.
Craig Reynolds was the first to simulate a flock: a school of fish or a flock of birds. How can a collection of individuals be combined to create a group with a single behavioural momentum? A flock is a group of objects that exhibit the general class of polarised, non-colliding, aggregate motion.
We use motion capture data to animate objects in the environment. These data were obtained in the motion capture studio at the University of Essex in England.
A special hardware interface will be built consisting of 5 plateaus. The player actively controls the movement of the flock with her own movements. Together with the other players she travels through 3 different landscapes. The group has to avoid obstacles by jumping, rolling, crawling, etc.
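Reynolds' flocking model rests on three local steering rules: separation (avoid crowding), alignment (match your neighbours' heading) and cohesion (move toward the local centre). A compact sketch of one simulation tick, with the rule weights and distances chosen arbitrarily for illustration:

```python
import math

def step(boids, neighbour_dist=50.0, crowd_dist=10.0, max_speed=4.0):
    """One tick of a boids simulation. Each boid is a dict with
    'pos' and 'vel' as [x, y] lists. Velocities are computed from
    the old state for every boid, then applied all at once."""
    new_vels = []
    for b in boids:
        near = [o for o in boids if o is not b
                and math.dist(b["pos"], o["pos"]) < neighbour_dist]
        vx, vy = b["vel"]
        if near:
            n = len(near)
            cx = sum(o["pos"][0] for o in near) / n   # cohesion target
            cy = sum(o["pos"][1] for o in near) / n
            ax = sum(o["vel"][0] for o in near) / n   # alignment target
            ay = sum(o["vel"][1] for o in near) / n
            sx = sum(b["pos"][0] - o["pos"][0] for o in near
                     if math.dist(b["pos"], o["pos"]) < crowd_dist)
            sy = sum(b["pos"][1] - o["pos"][1] for o in near
                     if math.dist(b["pos"], o["pos"]) < crowd_dist)
            vx += 0.01 * (cx - b["pos"][0]) + 0.1 * (ax - vx) + 0.05 * sx
            vy += 0.01 * (cy - b["pos"][1]) + 0.1 * (ay - vy) + 0.05 * sy
            speed = math.hypot(vx, vy)
            if speed > max_speed:                     # clamp speed
                vx, vy = vx * max_speed / speed, vy * max_speed / speed
        new_vels.append([vx, vy])
    for b, v in zip(boids, new_vels):
        b["vel"] = v
        b["pos"][0] += v[0]
        b["pos"][1] += v[1]
```

The point of the model is that all three rules use only local neighbours; the "single behavioural momentum" of the group emerges without any global controller, which is what makes it attractive for a multi-player piece where each dancer steers only part of the flock.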
October 28, 2004
A tool for collaborative mobile learning
MapTribe is a software application for mobile phones. It enables groups of users to see each other's positions on a city map on the screen, and to add to this shared map a number of objects that will be positioned on the map according to the user's real position in the city.
MapTribe is based on the idea that every person retains and develops over time an image of the spaces s/he lives in, which may be quite different from the physical inhabited place. It is common experience, in fact, that places become enriched with psychological features that transform their perception. MapTribe is a tool for exchanging and comparing these personal images of the city among a group of friends. The tool we are developing helps people analyse, capture and visualize this mental structure and share it in a social network. From the negotiation of the differences between the maps of each participant in the group, a common understanding arises: an informal learning by each user of the system about the image of the city.
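Placing a user's GPS fix (and the objects they drop) onto a shared city-map bitmap is, at its simplest, a linear projection from a lat/lon bounding box to pixels. MapTribe's own implementation isn't published; this is a minimal sketch of the idea:

```python
def to_screen(lat, lon, bounds, size):
    """Project a GPS fix onto a city-map bitmap.
    bounds = (lat_min, lat_max, lon_min, lon_max); size = (w, h) in px.
    Screen y grows downward, so latitude is flipped."""
    lat_min, lat_max, lon_min, lon_max = bounds
    w, h = size
    x = (lon - lon_min) / (lon_max - lon_min) * w
    y = (lat_max - lat) / (lat_max - lat_min) * h
    return round(x), round(y)
```

Over a single city the distortion of this flat projection is negligible, which is why simple location-aware map clients of this era could avoid full map projections.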
Drama, Mediation, Performance and Mobile Games
Intermedia at the University of Oslo has an interesting project about mobile gaming. RE-PUBLIC is a project that deals with humanistic perspectives on emerging digital communication forms & expressions & their information systems.
What are the dramaturgical and mediational features of commercial mobile games? Do location sensitive, always online, portable gaming terminals change the dramaturgical and mediational features of commercial mobile games? How will such terminals interact with ubiquitous computing in the physical environment to create new forms for hybrid games? How and to what extent are the less pre-scripted qualities of mobile and context-related play realised textually and interpersonally? How can the humanities inform and perform ICTs and vice versa? What can the study of computer games tell us about performance and the obverse? How can performance influence the design of computer games? What kinds of performances emerge in multi-player role-playing games?
October 27, 2004
ArtFutura's theme this year is Augmented Reality. It's taking place now in Barcelona [October 28th-31st]. The programme includes Howard "Smartmobs" Rheingold; Blast Theory, who will perform Can You See Me Now?; the SimpleTEXT performance; installations and experimental projects from Dublin’s MediaLab Europe and Montreal’s SAT; Richard Marks, creator of EYETOY; Greyworld; Fiona Raby; etc. (via)
Saturday October 30, 2004 - 4PM - 8PM, Live from Barcelona, SAT presents Phéromones, Art Futura's closing show. Phéromones is a networked multichannel audiovisual techno performance bringing together musicians Alain Thibault, Physical Noise Theatre & Champion and VJs jocool, Ladyroll & Le couple in Barcelona, as well as Yan Breuleux, Nuclear Ramjet & Johnny Ranger in Montreal.
Pheromones will connect the Mercat de les Flors in Barcelona and SAT's headquarters in Montreal over an Internet2 optical fiber network (providing data flows of up to 2 gigabytes per second), in order to share music and images generated on both sides of the Atlantic through an experimental technical setup: eight audio channels and immersive multi-screens. R&D for clubbing culture.
From October 26 to 30, 2004
A team of 17 artists, researchers and technicians, as members of the Territoires ouverts - TOT - Open Territories consortium initiated by the SAT, are invited by the Art Futura festival to present their work in Barcelona.
Several of the tools developed at SAT, like the nSLAM multichannel audio application and the pixelTANGO digital video mixer, will be put to use to show the know-how and expertise of the developers in the fields of IP video streaming and multichannel audio, IP telepresence (point-to-point and multi-point) and transmission of immersive audio-visual environments over IP networks.
The Open Territories project aims to foster the emergence of innovative forms of cultural expression by engaging the creativity of a new generation of artists and creators, in whose hands will be placed a host of advanced interface and networking technologies. The TOT project will be able to open the broadband network environment to talented creators who are ideally placed to illustrate broadband's potential as a means of avant-garde cultural expression.
Thursday October 28, 2004 - 10AM - 5PM
Friday October 29, 2004 - 10AM - 5PM
Saturday October 30, 2004 - 10AM - 4PM
SAT invites you and your friends to "cross the Atlantic" and project yourselves to the beautiful Mercat de les Flors in Barcelona, site of the Art Futura Festival.
During the three days of the festival, using two Telepresence stations, people on both sides of the Atlantic will be able to meet and speak as if they were sitting at the same table, with high audio and video resolution.
To participate and meet the Catalan artistic community, Art Futura's visitors and the SAT team in Barcelona, come to the SAT Café each day between 10AM and 5PM (4PM Saturday).
October 15, 2004
"In preparation for Wednesday's Urban Gaming lecture, we thought we'd make a map. We've mapped psychogeography, nomadic communication and technologically infused public space as enablers of the urban gaming explosion. Gaming is the perfect structure for experimenting with these new tools, places and communities. The dynamics between online and offline players, person and space, space and technology, person and technology are being stress tested and observed in the form of games in media labs, art collectives and companies throughout the world. And it's evolving to be a mechanism for social activism as well- so there you go!" From community-centric.
"In preparation for Wednesday's Urban Gaming lecture, we thought we'd make a map. We've mapped psychogeography, nomadic communication and technologically infused public space as enablers of the urban gaming explosion. Gaming is the perfect structure for experimenting with these new tools, places and communities. The dynamics between online and offline players, person and space, space and technology, person and technology are being stress tested and observed in the form of games in media labs, art collectives and companies throughout the world. And it's evolving to be a mechanism for social activism as well- so there you go!" From community-centric.
October 01, 2004
PERVASIVE AND LOCATIVE ARTS NETWORK (PLAN)
THIS IS THE (PLAN)
A new international and interdisciplinary research network in pervasive media and locative media has been funded as part of the Engineering and Physical Sciences Research Council (EPSRC) Culture & Creativity programme. The network will bring together practicing artists, technology developers and ethnographers with the aim of advancing interdisciplinary understanding and building consortia for future collaborative projects.
The network will stage three major gatherings. Each gathering will have a distinct form and focus: an initial workshop to launch the network and assess the state of the art; a technology summer camp for artists and technologists, including hands-on prototyping sessions using the facilities at Nottingham's Mixed Reality Laboratory; and a major public conference and participatory exhibition as a central component of the Futuresonic 2006 festival in Manchester. The network will also maintain a supporting web site and other resources.
CALL FOR PROPOSALS - PLAN Workshop
Submissions are invited to the first of these events, a two day public workshop with papers, demos and discussion sessions. The aim of the event is to launch the network, review the state of the art, bring key players in the field together, and make initial contacts. The event will also aim to identify a range of specific interests that can lead to the formation of sub-groups within the network. Position papers and a summary report will subsequently be published on the network web site.
The workshop will take place in London over two days in the week beginning 24th January 2005. Venue and final dates announced soon.
Please send submissions to firstname.lastname@example.org by Monday 8th November.
We request that participants seek support for travel and subsistence from their institutions. For participants without institutional affiliation, the network shall support applications to funding councils and foundations; please contact us for further details.
Pervasive and Locative Arts Network (PLAN) - Enhancing Mobile and Wireless Technologies for Culture and Creativity
This network will draw together computer scientists and engineers who are leading the field in developing pervasive and locative technologies; artists who are using these technologies to create and publicly deploy innovative and provocative experiences; social scientists with a proven track record of studying interactive installations and performances; industrial partners from the creative industries, spanning the arts, television, games, education, heritage, mobile computing and telecommunications sectors; and international partners who are coordinating parallel networks around the world.
The network aims to support the formation of a new interdisciplinary research community to investigate how the convergent fields of pervasive media and locative media need to evolve in order to support future cultural and creative activities. Specific network objectives are:
-To review the scope of the research that is currently being carried out in these fields through a focused workshop, leading to an integrated state-of-the-art survey paper.
-To identify the key research issues that need to be addressed in order to further develop pervasive and locative media to support culture and creativity, leading to a series of discussion white papers.
-To seed future projects by bringing artists, scientists and industry together in a creative environment so that they can generate and practically explore new ideas, and also to provide a forum for publicly demonstrating some of these.
-To produce online and offline resources to support researchers, artists, industry and to promote public understanding of this emerging field, including a public website, an online document repository for members and a newsletter and DVD.
The network will organise and support a range of activities aimed at growing a research community and generating new collaborative projects between artists and technologists. These will include staging three major research gatherings, producing online and offline resources for fellow researchers and PhD students, and outreach activities targeted at industry.
We will stage three major gatherings. Each gathering will have a distinct form and focus: an initial workshop to launch the network and assess the state of the art; a technology summer camp for artists and technologists, especially PhD students, including hands-on prototyping sessions using the facilities at Nottingham's Mixed Reality Laboratory; and a major public conference and participatory exhibition as a central component of the Futuresonic 2006 festival in Manchester. These major gatherings will be interspersed with more ad-hoc steering and reflection meetings as required by the network participants.
We will produce resources to publicise the network, encourage the exchange of perspectives and discussion, and to provide tutorial support for PhD students, artists and other researchers who wish to break into this area. These will include:
-Online resources: a public website providing access to network information including project deliverables as well as news of forthcoming calls for proposals and conferences, supported by an online document repository where members can upload documents and take part in discussion. The latter will be realised using BSCW or Project Place software.
-Offline resources: a six-monthly printed newsletter and a DVD of video
The network will reach out to other researchers beyond the initial partners and also to the creative industries. This will include distribution of the newsletter and also staging a series of industry seminars, for example as part of the TI/EPSRC Outreach programme. The network research associate will also carry out a series of site visits to different partners and potential partners in order to learn more about and report on ongoing activities.
BACKGROUND IN SCIENCE AND CULTURE
A new generation of pervasive technologies is enabling people to break away from traditional desktop PCs and games consoles and experience interactive media that are directly embedded into the world around them. And locative media, the combination of mobile devices with locative technologies, supports experiences and social interaction that respond to a participant's physical location and context. Together these convergent fields raise possibilities for new cultural experiences in areas as diverse as performance, installations, games, tourism, heritage, marketing and education.
A community of researchers working in pervasive media, also known as ubiquitous computing, are exploring location awareness as a requirement for the delivery of accurate contextual information. Another community, primarily consisting of informal networks of technical innovators and cultural producers, which identifies its field as Locative Media, is exploring developments in and applications of locative technologies within social and creative contexts. One of the aims of this network is to bring these two communities together, linking academic research initiatives and agendas to key figures and ground breaking developments that are currently taking place outside mainstream academia.
The creative industries are also beginning to take up these opportunities, led by artists who are actively charting out the potentials and boundaries of the new pervasive and locative media. Other cultural sectors have also been exploring the potential of pervasive and locative media, including the games industry through commercial examples of locative games played on mobile phones, such as Bot Fighters and Battle Machine, and also research projects such as ARQuake, Mindwarping, Pirates! and Border Guards. Researchers have also demonstrated applications in heritage and tourism, for example personal tourist guides and outdoor augmented reality displays, as well as in mobile learning experiences and participatory local history mapping projects.
A key characteristic of this research is its interdisciplinary nature: many of these projects combine practicing artists, technology developers and ethnographers, whose studies of early experiences actually delivered as public artworks have yielded new insights into the ways in which participants experience pervasive media, for example how they (and performers and technical crew) deal with uncertainty of location and connection, and, conversely, new metaphors for engaging in locative media.
However, realising the full potential of pervasive and locative media requires several further developments. First, it is necessary to expand the research community, drawing in new academic partners and also a greater range of partners from the creative industries. Second, it is important to deepen the interdisciplinary relationships between artists, technology developers and social scientists working within and between these two convergent fields. This is not only a matter of reflecting on this relationship; it is also necessary to pursue it in practice, which means forming new collaborations leading to practical projects. Third, we need to clarify and deepen the research agenda for this area, by opening up a variety of research questions, including:
-To what extent does the convergence of pervasive media and locative media signify a commonality of views, definitions and issues in each field?
-What new kinds of cultural applications will become possible through pervasive and locative media? Can we envisage new installations, performances, games and other public experiences?
-Can common design frameworks and tactics help create powerful user experiences? Can we identify and share design guidelines and generate useful abstractions, for example building on recent proposals for deliberately exploiting uncertainty and ambiguity?
-What tools are required by creative users, for example tools that enable them to easily (re)configure an experience to work in different locations or to orchestrate it from behind the scenes? What new research challenges do these embody? For example, how do we visualise the state of the technical infrastructure (networks and sensors) or intervene in participants' experiences?
-What methods do researchers use to design and evaluate their experiences? We already see the use of ethnographic studies, audience discussions and even analysis of system logs; how should these be extended and can we share approaches, tools and even datasets to enhance our understanding of experience and design?
These questions, combined with the need to build a broader inter-disciplinary research community, provide the underlying motivations for this network.
Steve Benford, Nottingham (Principal Investigator)
Drew Hemment, Salford
Henk Muller, Bristol
Matthew Chalmers, Glasgow
Michael Sharples, Birmingham
Geraldine Fitzpatrick, Sussex
Christian Heath, Kings College
Jon Hindmarsh, Kings College
Ben Russell, Headmap/Locative Media Lab
Marc Tuters, Locative Media Lab
Dennis Del Favero, NSW iCinema
Steve Sayers, NESTA
Toby Barnes, EM Media
Richard Hull, HP Labs
Denny Plowman, City of Nottingham Council
Sara Diamond, Banff Centre
Andrew Caleya Chetty, Metapod
Amanda Oldroyd, BT Exact
Matt Adams, Blast Theory
Nick Southgate, Ricochet TV
Annika Waern, iPerG
Giles Lane, Proboscis
Minna Tarkka, m-cult
Carsten Sorensen, LSE
Angharad Thomas, Salford
Chris Byrne, New Media Scotland
Paul Sermon, Salford
Nina Wakeford, INCITE, Surrey
September 29, 2004
From Edward Picot's review of AVATARA, a "Machinima Documentary" DVD by Donato Mancini, Jeremy Turner and Flick Harrison (536 Productions).
Avatara, from a Canadian team called 536, is a documentary about an online community - "a global subculture who spend their lives immersed in an online 3-D voice-chat program", as the blurb explains. What makes this particular documentary special is that "every second... was recorded in-world (i.e. online, within the virtual environment), so we don't meet the people directly. Instead, their voices speak to us through the puppet-like 'avatars': rabbits, pharaohs, seahorses, giraffes, the grim reaper, flowers, guitars, trolls (etc) which they've chosen to represent themselves in cyberspace." Because of this immersive approach the documentary is more than just a factual report about these people and their private activities: it gives us a sense of what their world feels like from the inside.
The Invisible Train
The Invisible Train is a mobile, collaborative multi-user Augmented Reality (AR) game in which players control virtual trains on a real wooden miniature railroad track. These virtual trains are visible to players only through their PDAs' video see-through displays, as they don't exist in the physical world. This type of user interface is commonly called the "magic lens" metaphor.
Players can interact with the game environment by operating track switches and adjusting the speed of their virtual trains. The current state of the game is synchronized between all participants via wireless networking. The common goal of the game is to prevent the virtual trains from colliding. Watch the Quicktime video. Invisible Train will be demonstrated at the International Symposium on Mixed and Augmented Reality (ISMAR 2004) in Arlington, VA.
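The project describes the synchronization only as happening "via wireless networking"; one common way to keep such a shared world consistent is to broadcast each player action as a timestamped event and apply events in timestamp order on every device, so all peers converge on the same state. A hypothetical sketch (the event format and field names are assumptions):

```python
from dataclasses import dataclass, field

@dataclass
class GameState:
    """The shared virtual world every PDA renders over the real track."""
    switches: dict = field(default_factory=dict)  # switch id -> "left"/"right"
    speeds: dict = field(default_factory=dict)    # train id -> speed level

def apply_events(state, events):
    """Apply (timestamp, kind, key, value) events in timestamp order,
    so peers that receive them in different orders still converge."""
    for _, kind, key, value in sorted(events):
        if kind == "switch":
            state.switches[key] = value
        elif kind == "speed":
            state.speeds[key] = value
    return state
```

Because the physical track never changes, only this small dictionary of switch positions and train speeds needs to travel over the network; the AR rendering is derived locally on each PDA.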
September 26, 2004
Full Body Engagement
Whom would you like to play to get the world back into shape? George W. Bush, Osama Bin Laden, or rather Tony Blair? Starting the computer game (T)ERROR, the player is spoilt for choice. A camera tracks the player’s movements and a projector throws the player's outline at original size onto a white wall in front of him, all in real time. When the player kicks, his shadow kicks. When the player ducks or jumps to avoid bumping into an enemy jet, his shadowy avatar does exactly the same in real time. Watch videos.
September 18, 2004
Chased in Brighton and Barcelona
Can You See Me Now?
(from AudioHyperspace of SWR2 RadioART: Hörspiel by Sabine Breitsameter.) Highly recommended: For those who want to experience Blast Theory's live and online game "Can You See Me Now", note the following dates: From 27 September to 1 October 2004, you can take part - in person - in Brighton, UK, and from 28 to 31 October 2004, join the game in Barcelona during the "ArtFutura" festival. Those who cannot make it to these places: It is also fun to join the game online. [more]
September 17, 2004
Digital Street Game
Physical Battles for Virtual Turf
Digital Street Game is a hybrid game of misadventure set on the streets of New York. It's a battle for turf, a contest of wills; in short, an excuse to explore the city.
Players compete for turf by performing and documenting stunts on the physical streets of New York in order to claim territory on a virtual map. Stunts consist of a random combination of 3 elements: 1) an object commonly found in the city (e.g. a bodega), 2) a street game (e.g. stickball) and 3) a wildcard/urban situation (e.g. happy hour). Players interpret these elements as they wish, then stage and photograph their stunt in order to claim a spot on the map. The more stunts players perform, the more turf they claim. But of course some players may want to compete for the same territory. In order to hold on to territory, players' stunts must score high with the rest of the game community.
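The random three-element stunt draw is simple to sketch. The element pools below are invented for illustration (the real game draws from its own lists, seeded with examples like "bodega", "stickball" and "happy hour"):

```python
import random

# Hypothetical element pools; only the three examples named in the
# game's description are taken from the source.
OBJECTS = ["bodega", "fire hydrant", "newspaper box", "scaffolding"]
STREET_GAMES = ["stickball", "hopscotch", "handball", "skully"]
WILDCARDS = ["happy hour", "rush hour", "blackout", "street fair"]

def draw_stunt(rng=random):
    """A stunt is one object, one street game and one wildcard/urban
    situation, drawn independently at random."""
    return (rng.choice(OBJECTS),
            rng.choice(STREET_GAMES),
            rng.choice(WILDCARDS))
```

With pools of this size there are 4 x 4 x 4 = 64 possible combinations; the interpretive work of staging the stunt is what keeps each one unique.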
How to play >>
Go to the site and register, select an intersection to claim and get your randomly generated stunt. From there, add your own ingredients (friends, mischief, whatever) to pull it off. Photograph your stunt in progress and upload it to the site to claim your turf. Repeat as necessary.
When to play >>
The game can be played at any time. There's no official beginning or ending; in other words, it's ongoing.
Stunts we love >>
September 13, 2004
"While mobile phone users are generally glad to embrace the enormous advantages of being reachable any-time and any-where, a reduced sense of privacy combined with the involuntary witnessing of anonymous people's private businesses has created a strange and sometimes awkward form of self-awareness and attention towards others. Mobile phones have transformed ordinary people into actors who narrate their most private details on the theatrical stages of train stations, restaurants, public spaces, streets, meeting areas, and any other social gathering places.
"Mobile Feelings is an artistic project that explores the ambivalence of sharing personal information with an anonymous audience. Instead of communication via voice or images to people we know, Mobile Feelings lets people communicate with strangers through virtual touch and body sensations including smell and sweat using specially designed mobile phones." Read more.
Mobile Feelings, a work by Christa SOMMERER & Laurent MIGNONNEAU, was part of Transmitter, April 21st - May 23rd, 2004; Kunsthalle Dominikanerkirche and around the city of Osnabrück.
September 08, 2004
mediating social experience
CitiTag is a wireless location-based multiplayer game, designed to enhance spontaneous social interaction and novel experiences in city environments by integrating virtual presence with physical presence. In the first version of CitiTag you roam the city with a GPS- and WiFi-enabled iPaq PocketPC in search of players of the opposite team to ‘tag’. You can also get tagged yourself if one of them gets close to you; then you need to find a friend to free you. Urban space becomes a playground and everyone is a suspect.
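The tag-and-rescue rules described above reduce to a couple of proximity checks over team and state. A rough sketch, with the data layout, function names and 10-metre radius all illustrative assumptions rather than CitiTag's actual code:

```python
import math

def gps_distance_m(a, b):
    """Approximate metres between two (lat, lon) fixes
    (equirectangular approximation, fine at city scale)."""
    lat = math.radians((a[0] + b[0]) / 2)
    dx = math.radians(b[1] - a[1]) * math.cos(lat) * 6371000
    dy = math.radians(b[0] - a[0]) * 6371000
    return math.hypot(dx, dy)

def try_tag(tagger, target, radius_m=10.0):
    """Tag succeeds only across teams, within range, and only if
    neither player is already tagged."""
    if tagger["team"] == target["team"]:
        return False
    if tagger["tagged"] or target["tagged"]:
        return False
    if gps_distance_m(tagger["pos"], target["pos"]) > radius_m:
        return False
    target["tagged"] = True
    return True

def try_free(rescuer, victim, radius_m=10.0):
    """A team-mate within range frees a tagged player."""
    if rescuer["team"] != victim["team"] or not victim["tagged"]:
        return False
    if gps_distance_m(rescuer["pos"], victim["pos"]) > radius_m:
        return False
    victim["tagged"] = False
    return True
```

Note the asymmetry that drives the social dynamic: tagging works only across teams, freeing only within one, so being tagged forces you to seek out a team-mate in physical space.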
September 07, 2004
Dance and Mutable Media
"I envision the development of UNSTABLELANSCAPE beyond the local performance and gross movements of the dancers towards the use of autonomous streams of data, subtle measurement of biological functions (breathing, heart beat, earth movement, GP systems and distributed performance using the internet)...I would like to explore the augmentation of the system hybridicity and maximizing its bottom-up architecture with the integration of organic tissue such as neurons and muscle cells to the real time aspect of the video-sound-movement continuum (residency at Simbiotica, Australia). Later, I will expand the performance with this bio-digital generativity and with analog/biologically inspired robots. It is a continuation of the aesthetics of emergence."
marlon barrios solano
September 02, 2004
ILoveBees Breaks Into the "Real World"
Posted by vpisteve at 11:27 PM on Alternate Reality Gaming Network: August 24, 2004. It all began at 6:07 am PDT. Payphones from coast to coast began to ring. A twelve-hour wave of carefully planned phone calls began to sweep the country, and Halo fans, Beekeepers, the media, and the just plain curious were there to try to intercept them.
This morning, we got to find out what was in store for those who were following the events of Dana and the rogue AI attacking her aunt's website. While many were expecting Halo 2 demo disks, what they got instead was one of the largest, most complicated distributed interactions in ARG history. Hundreds of people around the country descended upon over 200 locales, working as a team to answer phone calls correctly, in order to unlock a series of audio clues.
While we don't have the space here to go into the story itself (for that, check the links below), here's how the whole thing worked: players were presented with a web page that listed 30 blocks of seven GPS coordinates, which turned out to represent payphones around the country. Each location had a specific time attached, and each block of seven locations had a common password. As the phones started ringing at the exact corresponding times, players realized that they must respond correctly to the voice on the other end in order for something to happen. It was determined that players needed to provide the nickname of the AI that was calling them, plus the password listed for their location. If successful, the web page immediately updated, indicating success. Two successful responses per group of seven then resulted in an audio file being unlocked. Players successfully unlocked 22 of 30 files the first day, with another chance to unlock the remaining ones every 24 hours. These audio files, when assembled, are revealing a fascinating story.
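The unlock rule described above amounts to a small piece of bookkeeping per block of seven phones. Here is a minimal Python sketch; the class name, nickname, and password are hypothetical stand-ins, since the game's actual server-side logic was never published:

```python
# Hypothetical sketch of the payphone unlock rule: payphones are grouped
# into blocks of seven sharing one password; a correct answer means giving
# the AI's nickname plus that password, and two correct answers within a
# block unlock the block's audio file.

class AxonBlock:
    UNLOCK_THRESHOLD = 2  # correct responses needed per block of seven

    def __init__(self, password, locations, ai_nickname):
        self.password = password
        self.locations = set(locations)   # the block's seven payphones
        self.ai_nickname = ai_nickname    # nickname players must give the AI
        self.successes = set()            # locations answered correctly
        self.audio_unlocked = False

    def answer_call(self, location, nickname, password):
        """A player answers the ringing payphone at `location`."""
        if location not in self.locations:
            return False
        if nickname == self.ai_nickname and password == self.password:
            self.successes.add(location)
            if len(self.successes) >= self.UNLOCK_THRESHOLD:
                self.audio_unlocked = True
            return True
        return False
```

Requiring only two successes out of seven phones builds redundancy into the design: a broken or unreachable payphone cannot block a whole audio file.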
And this is only the beginning!
Forums and IRC channels broke all records today. The main chat room, #beekeepers, peaked at over 300 users at one point. In addition, the press is taking notice of what's going on. Word is that G4TV's Pulse will be airing a story about ilovebees this Friday, so stay tuned.
Here's a video of one event in Georgia. (6.7 MB)
August 25, 2004
The Relational Aesthetic
From an essay written for Camerawork: A Journal of Photographic Arts by Matt Locke:
"Those stories describe three interactions. Or performances. Or moments in the production, or consumption, of an artwork. Or perhaps they are descriptions of how the production and consumption of an artwork can be reduced to the same act, the same moment. They operate within, to use Nicholas Bourriaud's term, a 'relational aesthetic' - these artworks don't rely on an encounter with a traditional art object, nor do they substitute that with some transcendent concept of a dematerialised art object. In Bourriaud's definition, these works exist within "the realm of human interactions and its social context, rather than the assertion of an independent and private symbolic space". They are moments to be experienced, not viewed, reaching out and enmeshing themselves in the messy network of conversations and relationships that make up your life." Continue reading.
August 18, 2004
coffee and avatar, please
The Chit Chat Club
It will be a real cafe, with real tables, real coffee and pastries...some of the chairs will be regular chairs;... others will be seats for avatars, equipped with monitors and network connections...There are multiple ways to interact ... in the Chit Chat Club. As in an ordinary cafe, people can walk into the physical Chit Chat Club space, order a coffee, sit down and talk with others and people watch. One can also enter the Chit Chat Club remotely through the website and occupy an avatar chair. This action opens a real-time two-way audio and graphical connection between the physical space and the online participant. Finally, a person physically present in the Chit Chat Club space may also occupy an avatar chair.
Sounds fun to me. Check out the website
August 17, 2004
networks, mobile play & big brother
Much in the press lately is the work of Dennis Crowley, focused on 'finding the intersection between location-based services, social software and user-generated content on mobile devices.' His work includes dodgeball.com, a live, mobile-based friend finder; Pac-Manhattan, where physical players do the Pac-Man thing around Washington Square Park; and Big Brother Foosball, a foosball game that displays your Social Security number while you play. [Posted by Nathaniel Stern]
August 02, 2004
Pedestrian Cinematic Experience
Pedestrian by Shelley Eshkar and Paul Kaiser
February 13 - March 23, 2002:
Pedestrian is a public art project of site-specific, projected installations presented simultaneously in three locations in New York City: in the galleries at Eyebeam in Chelsea and at two outdoor venues, Rockefeller Plaza in Midtown and The Studio Museum in Harlem.
Created by artists Shelley Eshkar and Paul Kaiser and co-produced by Eyebeam and Art Production Fund, Pedestrian presents miniature, computer-generated moving figures projected directly onto the ground from above, providing a bird's-eye view of pedestrian traffic. These projected "pedestrians" interact to form patterns, crowds and streams that evolve unpredictably, almost organically, as if having lives of their own. Figures stand, watch, meet, sit, push, move away, sometimes run, or perhaps even lie down according to spontaneous rules of motion and engagement. Pedestrian builds on advanced technology, combining motion-capture, 3-D modeling software, texture mapping to visually enrich surface detail, behavioral rules devised by the artists to drive the action, and an overarching cinematic framework.
Pedestrian depicts a solid and almost tactile world within the pavement you walk upon. Its medium is an image projected directly onto the pavement. Instead of gathering before a sober rectangle of screen, spectators form a dynamic human perimeter around the work. These are conscious choices to emphasize the work's physicality and to have our own presence essentially complete the artwork; we and Pedestrian generate a unique crowd footprint in the real world.
By projecting Pedestrian onto public sidewalks, the artists experiment with cinematic experience in the context of public sculpture. Viewers see a bird's-eye view of 3D modeled plazas and figures that are mapped with texture samples gathered and scanned from the real world. All movement is built from a library of motion-captured data that is mapped onto synthetic characters. The final projection shows a miniature depiction of real-life motions and the daily public interactions of figures representing various urban archetypes. The distorted foreshortening of an aerial perspective of doll-like figures and plazas is suggestive of surveillance, or of video games in which we play out constantly changing narratives and dialogue that continue outside of our current viewpoint.
Featured in the ZKM book Future Cinema.
August 01, 2004
Mixed Reality in Los Angeles
34 North 118 West plays through a Tablet PC with a Global Positioning System receiver and headphones. GPS tracks your location to determine how the story unfolds - in real time, in real space, as you traverse the sidewalks of Los Angeles.
Description of Tour: Imagine walking through the city and triggering moments in time. Imagine wandering through a space inhabited with the sonic ghosts of another era. Like ether, the air around you pulses with spirits, voices, and sounds. Streets, buildings, and hidden fragments tell a story. The setting is the Freight Depot in downtown Los Angeles. At the turn of the century Railroads were synonymous with power, speed and modernization. Telegraphs and Railroads were our first cross-country infrastructures, preceding the Internet. From the history and myth of the Railroad to the present day, sounds and voices drift in and out as you walk. Read Narrative Archaeology by Jeremy Hight.
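Mechanically, a walk like this is a geofencing loop: poll the GPS, and when the listener's fix falls inside a story zone, play that zone's audio. A minimal sketch follows, assuming circular trigger zones and illustrative names (the piece's actual implementation is not public):

```python
import math

# Hypothetical sketch of a GPS-triggered audio walk like 34 North 118 West:
# story fragments are tied to circular zones, and a fragment plays when the
# listener's position enters its zone. Zone names, radii, and clip names are
# illustrative assumptions, not details from the actual piece.

EARTH_RADIUS_M = 6_371_000

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two points (haversine formula)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

class AudioZone:
    def __init__(self, name, lat, lon, radius_m, clip):
        self.name, self.lat, self.lon = name, lat, lon
        self.radius_m, self.clip = radius_m, clip

def triggered_clips(zones, lat, lon):
    """Return the clips whose zones contain the current GPS fix."""
    return [z.clip for z in zones
            if distance_m(lat, lon, z.lat, z.lon) <= z.radius_m]
```

In use, the walk software would call `triggered_clips` on every GPS update and start any clip that newly appears in the list, which is what makes the streets themselves feel authored.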
Posted by michelle at 06:07 PM
Two women in gear are on the ground. One with a laptop and the other with a projector pointing onto building walls in 3 key locations in the city. They are connected through a mobile wireless bicycle to an online team of five game players located around the world. They intervene on servers in a popular online military simulation game with performance actions carried out by the whole team. The live projections in the city can also be viewed through a web cam on the OUT website.
OUT takes its name from MOUT, a military term for Military Operations in Urban Terrain. Many military simulation computer games implement MOUT; for example, the US Army-developed game Full Spectrum Warrior trains gamers in MOUT combat. OUT is a criticism of the increasing militarization of civilian life that has been implemented in the US and elsewhere since 9/11. The Patriot Act, surveillance of public libraries, and the increased power of government to hold citizens in military custody without trial are instances of these expanded powers instigated by the Bush administration. In an endless spiral war of terror a government is at war with its own citizens, with soldiers in the midst of the fabric of ordinary life, as has been the case for a while in terror-alert cities like Tel Aviv and Jerusalem.
OUT is an artistic intervention in the public space of online games and cities. OUT is also happening at a moment when the street has again become a viable mode of expression. From Seattle to Cancun, activists are using wireless technologies and the web to organize actions and congregate. Dada-like, agenda-less mobs have appeared in New York and spread to other cities. In contrast, police at these events are increasingly high tech and cyborgean: they wear the latest riot-gear shields and padding, use high tech wireless communication systems, and move in sophisticated crowd-control formations. OUT, on the other hand, is a high tech art intervention: art activists with high tech gear and strategy. OUT of the closet of MOUT.
Republicans OUT of New York. The United States OUT of Iraq and the Middle East. Escalating worldwide Militarism and Violence, from whatever source (right-wing, oil-hungry U.S. capitalists or wealthy Islamic fundamentalists), OUT of Civilian Life. The U.S. Army and Pentagon computer game developers OUT of the minds of prepubescent gamers.
July 31, 2004
From Hacktivism to Tactical Media
Hacktivism (electronic resistance within the network) has fed into tactical media: urban, mobile performance events.
Being 'wired' has become mobile, ubiquitous, sentient, pervasive, omnisciently monitored computing. As we desire our movies and games to be more realistic - hyper-realistic - we now inhabit our computer-mediated, if not computer-generated, worlds. We interact through joysticks that push back with force feedback in response to our actions. We track ourselves with GPS-equipped devices; we pass and receive data streams: enveloped, engaged, connected, linked.
There's a surge in public events - happenings - Kaprow's "Environmental Theatre" of the collective, enabled by wi-fi technology and taken to the streets: MOBILized.
From: The ABC of Tactical Media by David Garcia and Geert Lovink
"In fact we introduced the term tactical to disrupt and take us beyond the rigid dichotomies that have restricted thinking in this area for so long: dichotomies such as amateur vs professional, alternative vs mainstream. Even private vs public. Our hybrid forms are always provisional. What counts are the temporary connections you are able to make. Here and now, not some vaporware promised for the future. But what we can do on the spot with the media we have access to. Here in Amsterdam we have access to local TV, digital cities and fortresses of new and old media. In other places they might have theater, street demonstrations, experimental film, literature, photography. Tactical media’s mobility connects it to a wider movement of migrant culture."
O.U.T.: Operation Urban Terrain: a live action wireless gaming urban intervention
When: August 30, during the Republican National Convention, New York City.
Magicbike is a mobile WiFi (wireless Internet) hotspot that gives free Internet connectivity wherever it's ridden or parked. By turning a common bicycle into a wireless hotspot, Magicbike explores new delivery and use strategies for wireless networks and modern-day urbanites. Wireless bicycles disappear into the urban fabric and bring Internet to as-yet-unserved spaces and communities. Mixing public art with techno-activism, Magicbikes are perfect for setting up ad hoc Internet connectivity for art and culture events, emergency access, public demonstrations, and communities on the struggling end of the digital divide.
July 30, 2004
On Liveness and Presence
What is Live Art?
This from the UK: "Unlike with opera, people often ask, 'What is Live Art?' The answer is people have been experiencing it for the last hundred years, from the earliest events at the Cabaret Voltaire where DADA was born to the happenings of the Sixties which influenced Andy Warhol, and in the 21st century this tradition continues to mutate."
Today Live Art is witnessed in performances by Vanessa Beecroft or The Battle of Orgreave by Jeremy Deller. Live Art is an interrogation of the aesthetics of presence. Live Art is about being there.
"Live Art mixes genres and diverse cultural experiences to express the complex organisation of contemporary life. It often involves work which questions commonly held assumptions. Live art is the trash-can of culture recycling and celebrating the debris of the modern world."
In 1965 Kaprow explained his evolution from collage to environments and happenings: "The action collages then became bigger, and I introduced flashing lights and thicker hunks of matter. These parts projected further and further from the wall into the room, and included more and more audible elements: sounds of ringing buzzers, bells, toys, etc., until I had accumulated nearly all the sensory elements I was to work with during the following years...." His works expanded until they filled the gallery, creating an integrated environment for the spectator. "I immediately saw that every visitor to the environment was part of it. And so I gave him opportunities like moving something, turning switches on -- just a few things. Increasingly during 1957 and 1958, this suggested a more 'scored' responsibility for the visitor. I offered him more and more to do until there developed the Happening...The integration of all elements -- environment, constructed sections, time, space, and people -- has been my main technical problem ever since."
Kaprow's progress did not occur in a vacuum, as he readily acknowledges. Robert Rauschenberg, Claes Oldenburg, and Jim Dine, among others, were also working on theatrical pieces, although they soon returned to more traditional areas. In Europe, Wolf Vostell and the Fluxus Group, and in Japan, the Gutai Group, were investigating similar directions.
The Theory of Happenings: The Happening, as Kaprow developed it, is a non-verbal, theatrical production that abandons stage-audience structure as well as the usual plot or narrative line of traditional theatre. Although a compartmented organization may be used, the performers are considered as objects -- often kinaesthetically involved -- within an overall design of environment, timing, sound, colour and light. Found environments are often used and built upon, but the events are not casually arrived at, nor are they entirely accidental and spontaneous.
His view was that the Assemblage was to be "handled and walked around", the "environment" was to be "walked into", but the Happening was to be a genuine "event" involving spectator participation and no longer confined to the museum or gallery. Performers are encouraged to capitalize upon unplanned occurrences while acting out fantasies based on real life within a roughly pre-ordained structure that suggests symbolic and universally basic themes and meanings. A field of aesthetic operation is thus created in relation to life, combining artfully determined materials with strong associational properties, and dimensions with events and things from the sphere "outside" of customary definitions for art.
In the following networked performance types we directly see the influence of Happenings, particularly in the emergence of wireless-network-enabled urban gaming such as the Go Game. The Go Game started off as a bunch of digerati having fun with their handhelds. Then it started becoming popular, so the developers began marketing their services as event producers to corporations; now their website has a decidedly corporate slant.
Pac-Manhattan was developed out of NYU's Interactive Telecommunications Program...possibly interesting if it doesn't fizzle out....
Urban Challenge, errr I guess it's now Verizon Urban Challenge. Enough said.
FlashMobs have matured into smart mobs.
GeoCaching is an entertaining adventure game for GPS users. Participating in a cache hunt is a good way to take advantage of the wonderful features and capabilities of a GPS unit. The basic idea is to have individuals and organizations set up caches all over the world and share the locations of these caches on the internet. GPS users can then use the location coordinates to find the caches. Once found, a cache may provide the visitor with a wide variety of rewards. All the visitor is asked to do is, if they take something from the cache, try to leave something in return.
July 27, 2004
More about Blast Theory
Matt Adams of Blast Theory posted a comment of encouragement to the blog's opening and I'd like to follow up with more about Blast Theory and a call out to Matt to respond to some specific questions about their work.
Blast Theory, comprised of Matt Adams, Ju Row Farr, and Nick Tandavanitj, describe themselves as 'an artist group whose work explores interactivity and the relationship between real and virtual space with a particular focus on the social and political aspects of technology. It confronts a media saturated world in which popular culture rules, using video, computers, performance, installation, mobile and online technologies to ask questions about the ideologies present in the information that envelops us.'
This is evident in a survey of their work over the past decade. The group formed in 1991; a chronology of their work from 1994-2004 is available on their website, where the themes and explorations of technology range from video in live performance to interactive installations, the most recent using a PDA or 3G phone to enable a mixed-reality, game-based experience.
Beginning with their 1999 work Desert Rain and continuing through the mixed reality works Can You See Me Now? (project website), Uncle Roy All Around You (project website), and I Like Frank (project website), they have collaborated with the Mixed Reality Lab (MRL), an interdisciplinary research initiative at the University of Nottingham that brings together leading researchers in computer science, engineering, and psychology to develop new technologies that merge the physical and digital worlds, focusing on playful, artistic, and educational applications.
The projects with MRL are clearly interactive in their game structures and use of mobile, networked technologies. The hybrid online/live-action approach was explored in Kidnap (1998), and interactivity has been a component of the work since Stampede (1994), in which the audience triggered how the piece unfolded via pressure pads.
Their work has been described as 'somewhere between theatre, performance art, installation and club culture' by Lois Keidan, Director of Live Arts, Institute of Contemporary Arts, London. Reviews of Uncle Roy All Around You in 2003 point to a direct link between the group's origins and the content and approach of their work: a group of friends with various creative backgrounds got together to form a theater company. This comes through in Matt's initial comment to the blog, in which he expresses his excitement about recent developments in networked_performance despite the backward-looking stance of many traditionally moored theater and dance practitioners. Blast Theory is freed from any prior history in a single discipline by comprising members from various practices, on which they draw to create their work.
Matt, firstly, correct any inaccuracies in the above. Would you tell us about the backgrounds of the members and elaborate on the evolution of the group from the early years until now? What led to the formation of the group, and why theater? It seems that BT has always had an interest in media and technology, born perhaps of club culture. What is the importance of the 'live' and 'performance' aspects of your work, as opposed to addressing the same social and political themes about technological impact on culture by focusing solely on interactive installations, as in An Explicit Volume, for example?
How did the collaboration with Steve Benford and MRL evolve? How has the relationship with MRL influenced BT's work conceptually? You might also elaborate on how the collaboration has enabled your work to evolve technologically as well. Can you speak about the offshoot of the BT/MRL collaborative endeavor, EVERPRESENCE, and your project Vicinity, to which you've been able to apply these joint efforts? It appears to include aspects of the mixed reality game-based works and a reality-TV episodic, perhaps. What's the timeline for presenting this work?
You also mentioned in your post that there is lots of interesting work happening in networked_performance. What's out there that you currently find interesting and why?...