What is anthropomorphism?
I think a lot of the time when roboticists talk about anthropomorphism, it has an air of cynicism. Like, “oh, those silly people, they think the robot has thoughts and feelings.” Of course most people don’t literally believe that, but it is interesting to think about why people have the propensity to anthropomorphize, and to understand what, if anything, we should do to design for it.
Dan Dennett’s Intentional Stance offers some insight. The idea is that even when people realize a thing may not actually have desires and intentions, treating it as if it does ends up being an efficient and successful way to reason about a lot of the phenomena we come across. Put another way: as social animals, humans became very good at reasoning about mental states (beliefs, desires, intentions) to predict the actions of other animals, and they applied this reasoning strategy to anything that produced seemingly self-motivated action that could not easily be explained by physics. These days it’s not just animals; there are a lot more things in the world that produce seemingly self-motivated actions that are not (on the surface) describable by physics (cars, computers, robots, …), so the mental-state reasoning strategy kicks in.
Don Norman has a similar stance and has long studied how an object’s design and appearance communicate and inform people of the object’s possible functions. In a recent book he takes this into the realm of robots, arguing that anthropomorphism is an abstraction that will help people understand how to interact with a robot, and that robots should use familiar mechanisms like emotive expressions to communicate internal state to a human user.
Maximizing anthropomorphism
The field of animation has a long history of maximizing anthropomorphism. Frank Thomas and Ollie Johnston wrote a beautiful book in 1981, The Illusion of Life, about the principles and process employed in some of the Disney classic animations. In a 1987 SIGGRAPH paper, Pixar’s John Lasseter describes how the Disney principles of creating the “Illusion of Life” in 2D animation should translate to 3D animation. In the paper he steps through the 11 fundamental principles of traditional animation and discusses their 3D correlates, arguing that many of the principles transcend the particular medium. Which raises the question for me: how can these principles of animation inform how we create an “Illusion of Life” in a robot?
Animation Principles for Robots
I’ll mention six of the principles (interpreted pretty loosely) that I find most directly applicable and interesting for thinking about robot behavior design. Most of these can be summed up as: It’s not just what you do, but how you do it. In robotics, I think we tend to work on how to select a particular motor control program, action, or behavior (selecting what to do). These principles of animation argue for having additional mechanisms to dynamically control parameters of how you do it.
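As a rough sketch of what that separation could look like (the names and parameters here are hypothetical, not any particular robot framework): the “what” is a goal pose, and the “how” is a small bundle of style parameters that the principles below would manipulate.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class MotionStyle:
    """Hypothetical 'how' parameters, kept separate from the 'what' (the goal)."""
    duration: float = 1.0    # Timing: slow, deliberate vs. quick, light
    windup: float = 0.0      # Anticipation: how far to back up before the main motion
    arc_height: float = 0.0  # Arcs: how much to lift the path above a straight line
    overshoot: float = 0.0   # Follow through: how far to carry past the goal before settling

def plan_reach(start, goal, style: MotionStyle, dt=0.02):
    """Placeholder planner: a straight-line reach whose shape the sketches
    below would modify according to the style parameters."""
    n = max(2, int(style.duration / dt))
    s = np.linspace(0.0, 1.0, n)[:, None]   # normalized time, one row per step
    return (1 - s) * np.asarray(start, float) + s * np.asarray(goal, float)
```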
Appeal — Creating a design or an action that the audience enjoys watching. This is what usually comes to mind when people talk about robots and anthropomorphism. It is the principle that would argue for creating robots with baby or pet-like proportions, so they are inherently appealing and nice to watch.
Anticipation — The preparation for an action. This is the technique of doing something to prepare the audience for the action that is about to come, directing their attention to the right part of the screen. I think the best example of this is the “wheely feet” that cartoon characters get before taking off running. Or making sure that an arm movement winds up before exerting some energy. These are definitely cues that robots of all morphologies could use to help a human partner understand more about what they are “about” to do. It would make people feel more comfortable if they felt they could accurately predict the robot’s behavior. Thus, I think cues that help with audience/user anticipation are important. I’m not currently working with any mobile robots, but if I were…I would be making them some “wheely-feet”!
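As a minimal sketch of the arm wind-up idea (assuming a simple point-to-point reach; the function name is made up): before heading toward the goal, spend a short fraction of the motion backing away from it, which telegraphs the direction of the action to come.

```python
import numpy as np

def reach_with_anticipation(start, goal, n_steps=100, windup=0.15):
    """Prepend a short wind-up segment that backs away from the goal
    before committing to the main motion toward it."""
    start, goal = np.asarray(start, float), np.asarray(goal, float)
    direction = goal - start
    windup_pose = start - windup * direction      # small move opposite the reach
    n_pre = max(1, int(0.2 * n_steps))            # ~20% of the time spent winding up
    s_pre = np.linspace(0.0, 1.0, n_pre)[:, None]
    s_main = np.linspace(0.0, 1.0, n_steps - n_pre)[:, None]
    pre = (1 - s_pre) * start + s_pre * windup_pose
    main = (1 - s_main) * windup_pose + s_main * goal
    return np.vstack([pre, main])
```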
Arcs — The visual path of action for natural movement. Many of these principles argue against motion control that is based purely on an efficient path or a straight inverse-kinematics solution. Arcs refers to the fact that even in a reaching action, or anything where the end point is the goal, there is something about the path taken to that end point that can make the motion more natural (and, I would hypothesize, more predictable, which could ease interaction because a person would find the robot’s actions more intuitive to follow).
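One simple way to get an arc, sketched below under the assumption of 3-D Cartesian waypoints with the third coordinate pointing up: bend the reach through a raised via point with a quadratic Bézier curve instead of interpolating straight from A to B.

```python
import numpy as np

def arced_reach(start, goal, lift=0.2, n_steps=100):
    """Reach from start to goal along an arc: a quadratic Bezier whose
    control point is the midpoint lifted along the vertical axis."""
    start, goal = np.asarray(start, float), np.asarray(goal, float)
    control = (start + goal) / 2.0
    control[2] += lift                    # assumes index 2 is the 'up' direction
    s = np.linspace(0.0, 1.0, n_steps)[:, None]
    return (1 - s) ** 2 * start + 2 * (1 - s) * s * control + s ** 2 * goal
```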
Secondary Action — The action of an object resulting from another action. The part about objects moving in reaction to a character’s actions, robots get for free from physics. But a more subtle aspect of secondary action is within the character’s body. As one example, when you nod your head, it is more than just your neck moving. On a robot, a natural head nod will have secondary actions in some of its other facial features (e.g., if it has eyebrows or ears). These secondary actions, while not instrumental to the goal of getting from A to B, create a more natural looking behavior.
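A sketch of that coupling (the joint names are made up for illustration): the head-nod pitch is the primary signal, and the secondary features get a smaller, slightly delayed copy of it, so they visibly react to the nod without being commanded separately.

```python
import numpy as np

def nod_with_secondary_action(n_steps=100, nod_amplitude=0.4,
                              ear_gain=0.5, ear_lag=5):
    """Primary action: a head-pitch nod. Secondary action: the ears follow
    with a smaller, slightly delayed version of the same motion."""
    t = np.linspace(0.0, 2 * np.pi, n_steps)
    head_pitch = nod_amplitude * np.sin(t)               # the nod itself
    ear_angle = ear_gain * np.roll(head_pitch, ear_lag)  # smaller and lagged
    ear_angle[:ear_lag] = 0.0                            # ears wait until the nod starts
    return {"head_pitch": head_pitch, "ear_pitch": ear_angle}
```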
Timing — Spacing actions to define the weight and size of objects and the personality of characters. Robots can certainly use this one. Slow motions communicate something different than fast motions. So it’s not just about getting the hand from A to B; again, it’s about how you get there, at a particular speed.
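A sketch of the idea: keep the spatial path fixed and just re-time it, so the same reach can read as heavy and deliberate or light and snappy depending on its duration (the smoothstep easing here is only one possible velocity profile).

```python
import numpy as np

def retime_path(path, duration, dt=0.02):
    """Resample a fixed spatial path (an N x d array of waypoints) over a chosen
    duration with an ease-in/ease-out profile, so speed carries the expression."""
    path = np.asarray(path, float)
    n = max(2, int(duration / dt))
    s = np.linspace(0.0, 1.0, n)
    s = 3 * s**2 - 2 * s**3                      # smoothstep: slow in, slow out
    idx = s * (len(path) - 1)
    lo = np.floor(idx).astype(int)
    hi = np.minimum(lo + 1, len(path) - 1)
    frac = (idx - lo)[:, None]
    return (1 - frac) * path[lo] + frac * path[hi]

# Same path (e.g., the arced reach above), two different "personalities":
# deliberate = retime_path(arc, duration=3.0)
# snappy     = retime_path(arc, duration=0.6)
```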
Follow Through and Overlapping Action — The termination of an action and establishing its relationship to the next action. I think this relates most closely to the earlier point about anticipation. Not only does a single action need its own set-up, but all of the robot’s actions have to make sense together; one is the anticipation of the other. And importantly, a particular action or behavior might terminate differently depending on what action or behavior is coming next.
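As a sketch of termination depending on the next action (the helper is hypothetical): instead of settling back to a neutral rest pose between motions, each motion ends by easing into the wind-up pose for the one that follows.

```python
import numpy as np

def chain_actions(poses, windup=0.1, n_steps_each=80):
    """Chain several target poses so each motion terminates by easing into
    the wind-up of the next, rather than stopping at a neutral rest pose."""
    poses = [np.asarray(p, float) for p in poses]
    segments = []
    current = poses[0]
    for i in range(1, len(poses)):
        goal = poses[i]
        # Follow through: if another action follows, finish this one slightly
        # short of its goal, in the anticipation pose for the next action.
        if i + 1 < len(poses):
            end_pose = goal - windup * (poses[i + 1] - goal)
        else:
            end_pose = goal
        s = np.linspace(0.0, 1.0, n_steps_each)[:, None]
        segments.append((1 - s) * current + s * end_pose)
        current = end_pose                 # the next motion begins from here
    return np.vstack(segments)
```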