
July 11, 2007

Avatar News: The Virtual Teacher +

[Image: UIC researchers' avatars]

Future avatars will be adept at manipulating human response

"Researchers from Illinois and Florida are developing a networking system which will create virtual representations of real people to improve our knowledge. They will use artificial intelligence and natural language processing software to enable us to interact with these avatars. The goal of the project, sponsored by the National Science Foundation (NSF), is to give us the possibility to interact with these virtual representations as if they were the actual person, complete with the ability to understand and answer questions. We should see the results at the beginning of 2008 — if the researchers succeed." [via]

"The current technology for creating computerized avatars for human interactions is relatively primitive; we tend to be surprised if a computerized representation can perform even vaguely human behaviors. But in the latest issue of Science, Judith Donath of MIT's Media Lab argues that this situation is likely to be temporary. She suggests that as programmers respond to the demand for more realistic human behavior in avatars, they will necessarily create the technology to manipulate human trust via the results.

Donath notes that even seemingly simple human behaviors are accompanied by collections of body language and expressions that can reinforce or undercut the messages we intend to send. For instance, she suggests that we signal our intention to engage someone in conversation by a complex suite of gestures, mostly nonverbal: "You carry out this goal not only by walking across the room but also by making eye contact, smiling, raising your brows, adjusting your clothes—a complex set of communicative behaviors that indicate your intention to start a conversation, allow you to gauge his willingness to do so, and show your level of determination."

Right now, even the most sophisticated avatars accomplish only a small subset of these behavioral collections. But that's beginning to change, at least within the research community. Although putting all of the components of these behaviors under user control is viewed as too complex, Donath cites work in which entire suites of behavior could be controlled by a single command. For example, an avatar commanded to end a conversation can nod its head, wave, and break eye contact. Users of such systems found them natural and more engaging, and they found their conversation partners to be more expressive.
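The pattern described here is a single high-level command fanning out into a coordinated suite of nonverbal behaviors. As a minimal sketch of that idea only, here is a hypothetical Python fragment; the names (Avatar, GESTURE_SUITES, perform, command) are illustrative assumptions, not part of any system Donath cites.

```python
# Sketch: one user-level command expands into a whole suite of gestures,
# rather than the user animating each gesture by hand.
GESTURE_SUITES = {
    "start_conversation": ["make_eye_contact", "smile", "raise_brows"],
    "end_conversation": ["nod_head", "wave", "break_eye_contact"],
}

class Avatar:
    def __init__(self, name):
        self.name = name

    def perform(self, gesture):
        # A real system would drive an animation engine here;
        # this sketch just logs the gesture.
        print(f"{self.name}: {gesture}")

    def command(self, intent):
        # The single command fans out into the full behavior suite.
        for gesture in GESTURE_SUITES.get(intent, []):
            self.perform(gesture)

avatar = Avatar("guide")
avatar.command("end_conversation")  # nods, waves, breaks eye contact
```

The point of such a design, as the passage suggests, is that the user states an intention once and the system supplies the socially expressive detail automatically.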

Although these sorts of advances may make for a more appealing virtual experience, Donath suggests that they have some disturbing implications for issues of trust and credibility when future avatars are used for communication. She notes that we interpret many behavioral collections in light of what they tell us about the person who is doing the talking. For example, we tend to view someone who doesn't make eye contact as more likely to be lying, providing uncertain information, or simply uninterested in talking with us. A well-programmed avatar can be commanded to engage in behaviors that simulate honesty, regardless of whether the speaker is trustworthy.

Research is also revealing that other factors play into an avatar's trustworthiness and credibility. For example, simply making an avatar appear more human (including providing it with a clear gender) caused it to be rated as more trustworthy. Other research has shown that trust can also be manipulated via more subtle techniques. Teams of people paid greater attention to an avatar that was created with a "team face," one that combined features from the members of the team. Individuals found political messages more persuasive when they were delivered by an avatar with a subtle resemblance to the listener's own face.

Donath specifically raises the disturbing possibility of "a world in which you are bombarded with oddly compelling ad campaigns presented by people just like you." But she also suggests that current trends, if they continue, may leave us with avatars tailored to the contexts where they're used. After all, we demand detailed and elaborate avatars for our online fantasy games but will happily accept far less for what's billed as our "Second Life." It's possible that we'll actually wind up demanding less sophisticated avatars to deliver the most critical information in order to avoid the potential for manipulation. But only, of course, if we're aware of the danger." From Future avatars will be adept at manipulating human response by John Timmer, Ars Technica. Related: Our avatars, ourselves.

Posted by jo at July 11, 2007 05:56 PM
