

Fast Synthetic Vision, Memory, and Learning Models
for Virtual Humans

James Kuffner, Jr.
Jean-Claude Latombe

Robotics Laboratory
Department of Computer Science
Stanford University
Stanford, CA 94305, USA

 

Abstract

This paper presents a simple and efficient method of modeling synthetic vision, memory, and learning for autonomous animated characters in real-time virtual environments. The model is efficient in terms of both storage requirements and update times, and can be flexibly combined with a variety of higher-level reasoning modules or complex memory rules. The design is inspired by research in motion planning, control, and sensing for autonomous mobile robots. We apply this framework to the problem of quickly synthesizing collision-free motions for animated human figures from navigation goals in changing virtual environments. We combine a low-level path planner, a path-following controller, and cyclic motion capture data to generate the underlying animation. Graphics rendering hardware is used to simulate the visual perception of a character, providing a feedback loop to the overall navigation strategy. The synthetic vision and memory update rules can handle dynamic environments in which objects appear, disappear, or move unpredictably. The resulting model is suitable for a variety of real-time applications involving autonomous animated characters.
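
The perception-and-memory loop the abstract describes can be illustrated with a small sketch. The Python below is not the paper's code; it assumes the rendering step has already produced a false-colored ID buffer (each object flat-shaded in a unique ID color, read back from the frame buffer), and all names here (ObservedObject, VisualMemory, in_view_volume) are hypothetical:

```python
# A minimal sketch of a synthetic-vision memory update rule, assuming the
# character's view has been rendered off-screen with one unique ID color
# per object, so the frame buffer reduces to a 2D array of object IDs
# (0 = background). Names and signatures are illustrative only.

import time
from dataclasses import dataclass, field


@dataclass
class ObservedObject:
    object_id: int
    position: tuple        # last observed position in world coordinates
    last_seen: float       # timestamp of the most recent sighting


@dataclass
class VisualMemory:
    objects: dict = field(default_factory=dict)  # object_id -> ObservedObject

    def update(self, id_buffer, world_positions, in_view_volume, now):
        """Update memory from one simulated rendering of the character's view.

        id_buffer       -- 2D list of object IDs read back from the render
        world_positions -- current true positions, keyed by object ID
        in_view_volume  -- predicate: would this stored position be visible?
        """
        visible = {oid for row in id_buffer for oid in row if oid != 0}

        # Rule 1: objects currently seen are stored or refreshed.
        for oid in visible:
            self.objects[oid] = ObservedObject(oid, world_positions[oid], now)

        # Rule 2: a remembered object whose stored position lies inside the
        # current view volume, yet is absent from the buffer, must have moved
        # or disappeared, so the stale record is discarded.
        for oid, obs in list(self.objects.items()):
            if oid not in visible and in_view_volume(obs.position):
                del self.objects[oid]


# Example: the character looks at a small patch of the world.
memory = VisualMemory()
frame = [[0, 1, 1],
         [0, 0, 2]]                      # object IDs 1 and 2 are visible
positions = {1: (4.0, 2.0), 2: (5.0, 1.0)}
memory.update(frame, positions,
              in_view_volume=lambda p: p[0] < 10.0,
              now=time.time())
print(sorted(memory.objects))            # -> [1, 2]
```

The two rules give the behavior the abstract claims for dynamic environments: newly appearing objects enter memory the first time they are rendered, while objects that have moved or vanished are forgotten only once the character actually looks at where it last saw them.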

 
