Motion capture provides the artist with interpolation curves for all the bones in a simplified human skeleton. The resulting data can be both manipulated and woven together with other interpolation data from a variety of sources. In the first example I created, the firefighter leans forward while reaching out (keyframed) before being blown back (rigid body simulation).
The example above demonstrated a simple switch from a motion I keyframed to a generated dynamic simulation. This next example demonstrates more of a weaving together of keyframed animation, simulation, and motion capture performance data.
The possibilities for expanding a motion capture performance beyond the capture stage and the limitations of the human performer, while maintaining a sense of realism and fluidity, are virtually without limit.
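Under the hood, all of these hand-offs come down to blending interpolation curves. Here is a minimal sketch in plain Python, with hypothetical curves and frame numbers rather than anything from my actual scene, of crossfading a keyframed rotation channel into a simulated one over a blend window (the animation software does this for you when you blend layers or plot a simulation over keys).

```python
# A minimal sketch (plain Python, hypothetical curves and frame numbers) of
# crossfading a keyframed rotation channel into a simulated one over a blend
# window. This only shows the arithmetic, not any particular package's API.

def sample(curve, frame):
    """Linearly interpolate a {frame: value} dict at an arbitrary frame."""
    keys = sorted(curve)
    if frame <= keys[0]:
        return curve[keys[0]]
    if frame >= keys[-1]:
        return curve[keys[-1]]
    for k0, k1 in zip(keys, keys[1:]):
        if k0 <= frame <= k1:
            t = (frame - k0) / float(k1 - k0)
            return curve[k0] * (1.0 - t) + curve[k1] * t

def blend(keyframed, simulated, start, end, frame):
    """Weight shifts from the keyframed curve to the simulated one
    between 'start' and 'end'; outside the window one source wins."""
    if frame <= start:
        w = 0.0
    elif frame >= end:
        w = 1.0
    else:
        w = (frame - start) / float(end - start)
    return (1.0 - w) * sample(keyframed, frame) + w * sample(simulated, frame)

# Hypothetical spine-rotation curves (degrees): the lean-forward is keyframed,
# the blow-back comes from a rigid body simulation.
lean_forward = {1: 0.0, 20: 35.0, 30: 35.0}
blow_back = {25: 35.0, 40: -60.0, 60: -75.0}

for f in range(1, 61, 5):
    print(f, round(blend(lean_forward, blow_back, 25, 35, f), 1))
```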
Tuesday, December 11, 2007
Monday, December 10, 2007
Deconstructing Character Choreography and Thoughts On How Motion Capture Is Received In The Dance Community
Margo shares some interesting thoughts on teaching her dance students to communicate gesture and motion. She asks them to choreograph their own performances by deconstructing the choreography, treating their "Movements as Sentences, and your Steps as Words," so that the gestures and motions of their characters successfully convey their message through dance performance.
Margo also discusses her experience working with motion capture technology to create an interactive dance performance in which motion-captured characters and live dancers perform together. She has found that many dancers feel hesitant or reluctant to share the stage with a screen-projected motion capture character performing the same choreography.
Monday, December 3, 2007
Applying Data
Applying motion capture data is relatively simple. After the data is captured during the shoot and reconstructed (interpolation gaps filled), one only needs to take the resulting C3D data and apply it to a specifically named character skeleton. The example I created (above) shows a character composed of 2D surfaces being driven by a skeleton with motion capture data applied.
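To make the "specifically named skeleton" idea concrete, here is a tiny Python sketch: the solved data arrives keyed by standard joint names, and applying it is just a matter of matching those names against the joints of the character rig. All of the names and rotation values below are invented for illustration; in practice the animation package handles this mapping.

```python
# A tiny hypothetical sketch of the naming idea: solved mocap rotations arrive
# keyed by standard joint names, and applying them means matching those names
# against the character rig's joints. All names and values are invented.

MOCAP_TO_RIG = {
    "Hips": "character1_Hips",
    "Spine": "character1_Spine",
    "LeftArm": "character1_LeftArm",
    "RightArm": "character1_RightArm",
}

def apply_frame(rig_pose, mocap_frame):
    """Copy per-joint Euler rotations from one mocap frame onto rig joints."""
    for mocap_joint, rotation in mocap_frame.items():
        rig_joint = MOCAP_TO_RIG.get(mocap_joint)
        if rig_joint is None:
            continue  # joint not present on this character, skip it
        rig_pose[rig_joint] = rotation

# One hypothetical frame of solved data (degrees) and an empty rig pose.
frame_42 = {"Hips": (0.0, 12.0, 0.0), "LeftArm": (45.0, 0.0, 10.0)}
rig_pose = {}
apply_frame(rig_pose, frame_42)
print(rig_pose)
```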
This example shows the same character holding a torch with a fire particle emitter parented to the tip.
Capturing Data
This is an example of a motion capture shoot. As you can see, I'm wearing a suit with markers and standing in a volume surrounded by twenty 4-megapixel cameras shooting at 120 fps. When shooting motion capture performances, the performer assumes a T-pose before and after the performed action to aid the next step, data reconstruction. As you can hear, I like to make my own sound effects to enhance the performance.
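To give a sense of what those twenty cameras are doing, here is a toy NumPy sketch of triangulating a single marker from two views by linear least squares. The camera matrices and marker position are made up, and a real system repeats this per marker, per frame, across every camera that can see the marker.

```python
# Toy sketch of optical triangulation: recover one marker's 3D position from
# two camera views by linear least squares (DLT). Cameras and marker are
# made up for illustration.
import numpy as np

def triangulate(projections, image_points):
    """projections: list of 3x4 camera matrices; image_points: list of (u, v)."""
    rows = []
    for P, (u, v) in zip(projections, image_points):
        rows.append(u * P[2] - P[0])   # u * (3rd row) - (1st row)
        rows.append(v * P[2] - P[1])   # v * (3rd row) - (2nd row)
    _, _, vt = np.linalg.svd(np.array(rows))
    X = vt[-1]                          # null-space solution, homogeneous
    return X[:3] / X[3]                 # back to ordinary 3D coordinates

# Two toy cameras with parallel axes, one metre apart along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

marker = np.array([0.2, 0.1, 3.0])                        # "true" position
uv1 = marker[:2] / marker[2]                              # ideal projections
uv2 = (marker[:2] + np.array([-1.0, 0.0])) / marker[2]

print(triangulate([P1, P2], [uv1, uv2]))                  # ~[0.2, 0.1, 3.0]
```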
Maks Naporowski & Motion Capture
During Maks Naporowski's presentation regarding motion capture he made two key points:
1. (Patriot) - When in pursuit of authentic motion, use performers who are trained to move in the desired fashion.
2. (Spiderman) - Regardless of the design and limitations of the character being driven by the motion capture data, the data will reveal the source performer's own build and limitations. Spiderman was keyframed because the motion needed to be super-human and nothing less.
Monday, November 26, 2007
Madame Tutli Putli & Performance Capture
Continuing on the subject of the threshold between having character and being a character (as well as the emotional connection that determines what we perceive as a character), Madame Tutli Putli is a really neat example of mixed-media character design. This is a stop-motion film, but the eyes of the main character were composited in from footage of a live actor who was recorded after the animation. The result is a unique, often creepy, look.

This process is a unique form of performance capture in which the performance occurs only after the animation, and it is probably why this film has avoided the controversy that typically surrounds optical motion capture films such as Beowulf.
But in a sense, Madame Tutli Putli is much closer to reality than optical motion capture, simply because there are no markers: the actual actor footage is used. Moreover, the eyes are arguably the most important component of facial animation. Certainly the footage is used very creatively, but the optical mocap pipeline involves far more adjustments and tweaks to the data.
One key difference lies in the role of the animator. In Madame Tutli Putli, the animator's performance drives the actor's contribution, whereas in typical mocap productions, it is the actor's performance that is most important. A subtle difference, but one that appears to be central to the mocap debate.

Wednesday, November 14, 2007
Automated Anticipation?
I came across a paper entitled "Anticipation Effect Generation for Character Animation". Basically, these researchers were looking at automated ways to add anticipation to existing animation (presumably 3D animation). It's an interesting notion, and it tackles the subtleties of animation from the opposite direction from motion capture: rather than capturing nuance from a performer, it aims to make it easier to improve keyframe animation. Most animators would object to this approach on the grounds that anticipation is subjective and not easily derived automatically. The researchers found the same thing: they could not automate the duration of the anticipation and needed human intervention.
Here's the abstract:
"According to the principles of traditional 2D animation techniques, anticipation makes an animation convincing and expressive. In this paper, we present a method to generate anticipation effects for an existing animation. The proposed method is based on the visual characteristics of anticipation, that is, “Before we go one way, first we go the other way [1].” We first analyze the rotation of each joint and the movement of the center of mass during a given action, where the anticipation effects are added. Reversing the directions of rotation and translation, we can obtain an initially guessed anticipatory pose. By means of a nonlinear optimization technique, we can obtain a consequent anticipatory pose to place the center of mass at a proper location. Finally, we can generate the anticipation effects by compositing the anticipatory pose with a given action, while considering the continuity at junction and preserving the high frequency components of the given action. Experimental results show that the proposed method can produce the anticipatory pose successfully and quickly, and generate convincing and expressive anticipation effects."
The entire paper can be found at:
http://www.springerlink.com/content/h42451j2j38l5216/ and the pdf is at
http://www.springerlink.com/content/h42451j2j38l5216/fulltext.pdf
(You may need to be on USC campus to be able to access the links)
Tuesday, November 13, 2007
Beowulf's Cousin
This is an example of motion capture that I did this semester at USC using a live performer.
The clip illustrates two things: first, mocap can produce very subtle and natural movement when it comes to body motions. Second, as demonstrated by the problems with the character's arm when he falls down, motion capture is prone to problems due to occlusion (i.e. the markers are not visible to enough cameras), and the data needs to be manually reconstructed by an animator.
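For a feel of what that reconstruction involves, here is a minimal Python sketch of the simplest fix: filling an occlusion gap in one marker coordinate by linear interpolation between the surrounding good frames. Real tools also offer spline fills and rigid-body fills from neighbouring markers; the data here is invented, and the gap is assumed to have good frames on both sides.

```python
# Minimal sketch of the simplest reconstruction: an occluded marker
# coordinate is stored as None for the missing frames and filled by linear
# interpolation between the surrounding good frames. Assumes each gap is
# bounded by good frames on both sides; the data is invented.

def fill_gaps(track):
    """Linearly interpolate over runs of None in a list of floats."""
    filled = list(track)
    i = 0
    while i < len(filled):
        if filled[i] is None:
            start = i - 1                      # last known sample
            end = i
            while filled[end] is None:
                end += 1                       # first known sample after gap
            a, b = filled[start], filled[end]
            for j in range(start + 1, end):
                t = (j - start) / float(end - start)
                filled[j] = a * (1.0 - t) + b * t
            i = end
        else:
            i += 1
    return filled

# One coordinate of a wrist marker with a three-frame occlusion.
print(fill_gaps([10.0, 11.0, None, None, None, 15.0, 16.0]))
```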
Monday, October 29, 2007
History of Performance Capture
1. History of Motion Capture (1970-1994)
Highlights:
- 1930s: Disney's Snow White - rotoscoping
- 1980s: MIT's Graphical Marionette, real-time CG puppets (Mike, Waldo), the optically captured Dozo (tape markers)
- 1990s: real-time CG puppets (Mat, Mario, Alive!), Acclaim's 4-camera optical system for video games
Wednesday, October 24, 2007
Performance Capture
There are many different ways to capture a performer's motion. Here is a list of the current methodologies:
- Optical: multiple cameras triangulate the position of spherical markers on the body
- Inertial: sensors (accelerometers, gyroscopes) on the body transmit wirelessly to a PC (see the sketch after this list)
- Mechanical: the performer wears an exoskeleton whose mechanical linkages measure joint motion
- Magnetic: position and orientation of the performer are derived from relative magnetic flux
- Computer Vision Based: captures motion in a volume without markers or worn sensors
- Electrooculography: captures eye movement with electrodes placed around the eyes
- Mova Contour System: cameras capture movement by evaluating patterns in phosphorescent makeup on the actor's face
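As a small illustration of the inertial entry above, here is a toy Python sketch that recovers an angle by integrating gyroscope angular-velocity samples over time. It is one axis only, with made-up readings and no drift correction, which real suits handle by fusing accelerometer and magnetometer data.

```python
# Toy sketch of the inertial approach: a gyroscope reports angular velocity
# and orientation is recovered by integrating it over time. One axis only,
# made-up readings, no drift correction.

def integrate_gyro(angular_velocities, dt, initial_angle=0.0):
    """Accumulate an angle (degrees) from angular velocity samples (deg/s)."""
    angle = initial_angle
    angles = []
    for w in angular_velocities:
        angle += w * dt
        angles.append(angle)
    return angles

# 120 Hz samples of a forearm rotating for a quarter second, then stopping.
samples = [120.0] * 30 + [0.0] * 10                 # deg/s
print(integrate_gyro(samples, dt=1.0 / 120.0)[-1])  # ends up near 30 degrees
```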
Fun Links:
- Dead Man's Chest: http://vfxworld.com/?sa=adv&code=1e242f07&atype=articles&id=2941
- Final Fantasy: http://arstechnica.com/wankerdesk/01q3/ff-interview/ff-interview-1.html
- Polar Express: http://vfxworld.com/?sa=adv&code=1e242f07&atype=articles&id=2289
- Monster House: http://vfxworld.com/?sa=adv&code=1e242f07&atype=articles&id=2733
- Lord of the Rings: http://vfxworld.com/?sa=adv&code=1e242f07&atype=articles&id=1974&page=1
- King Kong: http://www.vfxworld.com/?sa=adv&code=319b255d&atype=articles&id=3106
Thursday, September 27, 2007
Margo Apostolos Bio
Margo is both a dancer and a roboticist (!). Seems like she will be a really interesting resource for trying to understand how we perceive and interpret gesture.
I found a neat article that she wrote about robot choreography: http://www.jstor.org/view/0024094x/ap050073/05a00110/0
Since the article is on JSTOR, which requires a subscription, you may need to be on campus to access the link (USC provides access).
Here's her bio from the USC website:
Dr. Apostolos has authored and presented numerous articles on her research and design in Robot Choreography. In addition to her doctoral and post-doctoral studies at Stanford University, she earned an M.A. in Dance from Northwestern University. She has served as visiting professor in the Department of Psychology at Princeton University and has taught in Chicago, San Francisco, at Stanford University, Southern Illinois University and California Polytechnic State University-San Luis Obispo. A recipient of the prestigious NASA/ASEE Summer Faculty Fellowship, Dr. Apostolos worked for NASA at Jet Propulsion Laboratory/Caltech as a research scientist in the area of space telerobotics. At the Annenberg Center for Communication, Dr. Apostolos conducted research on facial expressions and human-computer interactions. Her work in this area continues in collaboration with the USC neuroscience program. She was named a Southern California Studies Center faculty fellow, where she is surveying theatre and dance in the Southern California region. Currently, Stanford University is providing funding for Dr. Apostolos’ work in the area of “Dance for Sports” in collaboration with the Stanford Athletic Department. This work was presented to the International Olympic Committee in Sydney, Australia, for the 2000 Games and in preparation for the Salt Lake City Games in 2002. Dr. Apostolos developed the Dance Minor program in Theatre, directs the dance concert each semester, and coordinates the Open Gate Dance Program. In 2004, she presented at the Athens IOC meeting on Dance for Sports and at the World Congress of Dance. This past spring, she was instrumental in bringing internationally-renown director/choreographer Mark Morris to campus for a workshop that integrated motion capture technology and robotics with modern dance. Most recently, she co-founded the Cedars-Sinai/USC Dance Medicine Center, a specialized treatment center – the first of its kind in Los Angeles – offering preventive care and treatment specifically designed for professional and recreational dancers.
Maks Naporowski Bio
I think Maks' career is really interesting because he's done character animation and character setup, and he also has experience keyframing over mocap data. He's done a lot of high-profile work at Sony, and as a result, I think he's in a really good position to comment on both the technical and artistic sides of 3D animation. Also, if you didn't already know, he teaches both a rigging course and a 3D animation course at DADA. He's currently working as a character animator on Beowulf.
Below is a bio that I got from the DADA website:
Born in Kielce, Poland in the seventies, Maksymilian Naporowski studied art, philosophy, and psychology at McMaster University in Canada. In spring of 1996, he graduated with a B.A. in Philosophy, after which he moved on to Sheridan College in Toronto for a summer art program. The following fall semester, he switched to Seneca College, and studied digital animation, focusing on Softimage and Alias/Wavefront's "Power Animator" software. In 1997, he received a teaching position for the same program, teaching digital animation and character setup for two semesters. In 1998, he took a demonstration artist position for a Toronto based company called "Puppet Works".
There with Puppet Works he developed character demonstration content, presentations for visual effects oriented shows such as Siggraph and NAB, and provided training, consultation, and development services for clients. He provided character animation development and training using the PuppetWorks digital input devices for a variety of high profile companies and shows. This included training, character animation, and motion capture tests for the "Duke Nuke'em" title at 3D Realms in Dallas as well as internal 3d character content for Lockheed Martin in Florida. He also trained staff, did character rigging, and created animation tests for "Merlin" and "Lost in Space" at the Jim Henson Creature Shop in the UK, as well as for "Atlantis" at Walt Disney Studios in California. His character animation tests on the "Incredible Mr. Limpet" at Pacific Title/Mirage were with actor-comedian Jim Carrey and on the 1998 spoof comedy "Jane Austen's Mafia!" he produced character rigging, animation, and structured the pipeline for Swansons Production.
In 1999 he moved to Los Angeles to work with incredible artist and director Mark Swanson on a CG feature called "Fish Tank". Within six months they developed the characters in 3d, built a solid character animation/motion capture pipeline, and provided over three minutes of animation tests for the feel and look of the film. When the funding fell through, he moved on to a job at Centropolis FX where he helped develop the motion capture/animation pipeline for "Patriot" and choreographed/animated some of the large battle sequences. After "Patriot" he spent a few months working as a 3d animation consultant for 2G Productions/Elektrashock where he rigged and animated content for a video feature called "You're mine", a music video for No Prisoners called "Fiction, dreams in digital" by Orgy, and a feature film named "Vertical Limit".
In 2000, he took on a cg artist position with Sony Pictures Imageworks and started on "Harry Potter and the Sorcerer's Stone" as a technical animator, where a majority of his time was spent on the musculature of "Fluffy", the 3-headed dog, the troll, the Centaur, the digital humans- "Harry" and the kids for the "Quidditch" match. For the sequel, he provided both the animation pipeline and support for the animation team. He has been working for Imageworks for over five years and his duties include character setup, pipelines, support, modeling, and character animation. He has taken a lead role on a number of shows and has focussed on character animation in the recent past. His credits at Sony Pictures Imageworks include
Ghost Rider
Superman Returns
Zathura
Bewitched
Aviator
Cursed
Polar Express
Matrix Revolutions
Haunted Mansion
Bad Boys 2
Charlie's Angels 2
The Chubbchubbs ! - Oscar Winner for best Animated Short Film
Stuart Little 2 - VES Award for best character animation in an Animated Motion Picture
Harry Potter and the Sorcerer's Stone
Evolution
In addition to the production work, in recent years he has been teaching character rigging and animation in Maya at the University Of Southern California's School of Cinematic Arts.
Sunday, September 23, 2007
Interesting essay on Motion Capture
Although we're not looking at mocap in detail yet, we should do so at some point in the semester. I found this really good survey essay on mocap that talks about the tension between mocap and other forms of animation.
http://www.animationjournal.com/abstracts/mocap.html
The article is written by Maureen Furniss, who teaches at USC, so she'd be a good resource for any questions.
I liked this passage because it links animation and dance, and we're looking at both those areas too:
"From this perspective, we might see the animator's work as a form of visual 'notation'. That is how Lisa Marie Naugle describes motion capture in terms of her dance performance work. Notation, as I use the term here, generally refers to a recording of movement in print form, so that it might be preserved, studied, and perhaps re-enacted at some future time. Ethnographic researchers can use notation, for example, to record ceremonial dances that are on the verge of 'extinction' because the people who perform it are becoming integrated into another culture. One of the best known forms of dance notation is called the Laban dance notation system (actually a software program called 'LabanWriter' can be integrated into the motion capture process). Naugle compares Laban and motion capture, as two forms of notation, with the use of video and film recording. She explains that the benefits of using motion capture over other sorts of notation are that it allows analysis from any point of view and that it can be visualized in 3D form. She explains, "Looking at dance images from different locations and perspectives, notators, choreographers and dancers can create annotations or list notes about the work. . . . While video may be used repeatedly to extract information about color, motion, and, to a limited extent, depth, it is often lacking in detail or definition. Even if the video has been edited from several different perspectives, the medium does not allow for a full exploration of movement in three dimensions."24