Think back to your favourite video games and CG-animated movies. Ever remember seeing a character put on trousers, or slip on a jacket? They might seem like mundane actions not entirely worthy of an animator's time, but the reason for their absence is actually more technical than artistic. While things like cloth, hair, and water are typically generated with physics simulations, character movement is animated manually. Getting those two systems to interact naturally is where the challenge lies.
Researchers from Georgia Tech think they've come up with a solution to the problem, designing a system that allows a virtual human character to put on various types of garments in a natural way. The system—discussed in detail in their paper "Animating Human Dressing", released earlier this year (PDF)—consists of three main components: a "primitive" action editor, a dressing controller, and a cloth simulator. It works by taking a reference motion—usually obtained via motion capture (mo-cap)—after which the user assembles, in the primitive action editor, a sequence of primitive actions that describe that motion.
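The paper describes a pipeline rather than a public API, but in rough terms the flow could be sketched like this (every class, function, and parameter name below is a hypothetical stand-in for illustration, not the authors' code):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PrimitiveAction:
    """One parameterised building block of a dressing animation.

    The names and parameters here are illustrative; the paper describes
    primitives such as aligning a hand with an armhole or dragging
    cloth along a limb.
    """
    name: str
    params: dict

def edit_primitives(reference_motion: str) -> List[PrimitiveAction]:
    """Primitive action editor (hypothetical stub): the user decomposes a
    mo-cap reference motion into an ordered list of primitive actions."""
    return [
        PrimitiveAction("align", {"end_effector": "right_hand",
                                  "cloth_feature": "armhole"}),
        PrimitiveAction("drag", {"limb": "right_arm",
                                 "from": "wrist", "to": "shoulder"}),
    ]

def run_dressing(reference_motion: str) -> None:
    """Dressing controller (hypothetical stub): executes each primitive
    against the garment, with a cloth simulator advancing the physics
    between character poses."""
    for action in edit_primitives(reference_motion):
        print(f"executing {action.name} with {action.params}")
        # a real system would step the cloth simulation here

run_dressing("reach_into_sleeve.bvh")
```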
An example would be putting an arm into a sleeve, which the researchers say can be described as first aligning the hand with the armhole, then dragging the cloth up the arm from the wrist to the shoulder. The primitive actions are parameterised building blocks for creating various dressing animations, with path-planning algorithms considering the state of the physics-based garment only at the brief moments the team identify as crucial to completing the action.
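To make that "only at crucial moments" idea concrete, here is a toy sketch of how a drag primitive's planner might query the expensive cloth simulation sparsely rather than at every step. The helper names, the crucial-moment predicate, and the simple 50/50 blend are all made up for illustration and are not taken from the paper:

```python
def plan_limb_path(waypoints, is_crucial, query_cloth):
    """Plan a limb path, consulting the cloth simulation only sparsely.

    waypoints:   candidate 3-D targets along the limb's path
    is_crucial:  predicate marking the brief moments that matter
                 (e.g. the hand first entering the armhole)
    query_cloth: expensive call into the cloth simulator, returning a
                 garment feature point (say, the armhole centre) to aim at
    """
    path = []
    for i, pose in enumerate(waypoints):
        if is_crucial(i):
            # Only here does the planner look at the physics-based garment.
            pose = adjust_toward(pose, query_cloth())
        path.append(pose)
    return path

def adjust_toward(pose, cloth_feature):
    """Nudge a 3-D target toward a cloth feature point (toy blend)."""
    return tuple(0.5 * p + 0.5 * c for p, c in zip(pose, cloth_feature))

# Toy usage: plan a wrist-to-shoulder drag, consulting the cloth only at
# the first waypoint, where the hand must line up with the armhole.
waypoints = [(0.0, 0.0, 0.0), (0.0, 0.5, 0.0), (0.0, 1.0, 0.0)]
path = plan_limb_path(waypoints,
                      is_crucial=lambda i: i == 0,
                      query_cloth=lambda: (0.1, 0.0, 0.2))
print(path)
```

The appeal of this kind of sparse querying is cost: simulating deforming cloth is far more expensive than replaying a planned joint-space path, so checking the garment only at the handful of moments that decide success keeps the planning tractable.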