It's SIGGRAPH time and I'm here in Boston getting ready to host three sketch sessions ("People, Puppets, and Pillows," "Fast and Cheap," and "About Face") in addition to various NVIDIA events all week. As always I'm booked for multiple things at once most of the day, every day, and there's lots to see and learn and sometimes share.
At this moment I'm sitting in the sketch audience watching a terrific presentation on the facial action capture used for King Kong. Short version: they built a solver for their facial mocap that reduced the many raw XYZ motion-capture tracks into tracks along the axes of FACS, the Facial Action Coding System (axes in FACS space correspond to human expressions, and to a strong degree those expressions correspond to specific immediate emotional states). They then remapped those tracks onto a roughly corresponding "gFACS" gorilla FACS of their own creation, which gave really terrific high-quality results. As a bonus, the FACS tracks are a lot more intuitive for subsequent tweaks and revisions than trying to manipulate dozens of scattered XYZ tracks. Cool!
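To make the solver idea concrete, here's a toy sketch of the general technique: model each FACS action unit as a displacement basis over the marker coordinates, then least-squares solve each frame of raw XYZ data for the activation weights of those units, and finally remap to the gorilla rig through a correspondence matrix. Everything here (names, shapes, the identity remap) is my own illustrative assumption, not the actual King Kong pipeline.

```python
import numpy as np

# Toy sketch (assumed, not the production solver): each FACS action
# unit is a displacement basis vector over the marker coordinates, and
# each mocap frame is solved for the units' activation weights.

rng = np.random.default_rng(0)

n_markers = 30            # 30 XYZ tracks -> 90 scalar channels per frame
n_channels = n_markers * 3
n_aus = 6                 # toy number of FACS action units

# Basis: how each action unit displaces the markers from the rest pose.
facs_basis = rng.normal(size=(n_channels, n_aus))

# Simulate one captured frame: known activations plus capture noise.
true_weights = np.array([0.8, 0.0, 0.3, 0.0, 0.5, 0.1])
frame = facs_basis @ true_weights + 0.01 * rng.normal(size=n_channels)

# Solve the frame back into FACS-space weights: six meaningful
# expression tracks instead of ninety scattered XYZ channels.
weights, *_ = np.linalg.lstsq(facs_basis, frame, rcond=None)

# Remap human FACS weights onto a hypothetical "gFACS" gorilla rig via
# a correspondence matrix (identity here; the real mapping was crafted
# by the artists).
remap = np.eye(n_aus)
gfacs_weights = remap @ weights
```

With far more channels (90) than unknowns (6), the solve recovers the activations almost exactly despite the noise, which is why the resulting FACS tracks are so much cleaner to edit than the raw marker data.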