In this video we show how to create basic homemade facial MoCap videos with three different software tools/methods:
- BLENDER
- AFTER EFFECTS
- Microsoft KINECT (through FACESHIFT)
We give a brief introduction to the basic steps, an overview of the results you can achieve, and an analysis of the differences between these tools.
Facial Motion Capture is a process that electronically converts the movements of a person's face into the 2D/3D movements of a digital humanoid, using cameras or laser scanners. It can be used to produce CG (computer graphics) animation for movies, games, or real-time avatars. Because the motion of CG characters is derived from the movements of real people, it results in more realistic and nuanced character animation than animation created manually.
Blender and AE are two MARKER-BASED methods, which require placing markers on the actor's face.
Traditional marker-based systems apply up to 350 markers to the actor's face and track their movement with high-resolution cameras. Such a system can use active or passive markers.
Active LED marker technology is currently used to drive facial animation in real time, providing immediate feedback to the user.
Passive markers (the kind used in the video) are simple small circular stickers.
Unfortunately, the markers can interfere with the actor's facial expressions and sometimes degrade the performance.
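As an illustration of what "tracking the marker movement" means (this is not the pipeline used in the video, and the coordinates below are hypothetical), here is a minimal sketch: once each frame's marker centroids are detected, a greedy nearest-neighbour match links every marker in one frame to its position in the next, so each sticker can be followed over time.

```python
import math

def track_markers(prev, curr):
    """Greedily match each marker centroid in the previous frame
    to its nearest unmatched detection in the current frame.
    Returns a list of (prev_index, curr_index) pairs."""
    matches = []
    unused = set(range(len(curr)))
    for i, (px, py) in enumerate(prev):
        best, best_d = None, float("inf")
        for j in unused:
            cx, cy = curr[j]
            d = math.hypot(cx - px, cy - py)
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            matches.append((i, best))
            unused.discard(best)
    return matches

# Two consecutive frames of (hypothetical) marker centroids, in pixels:
frame_a = [(100, 200), (150, 210), (130, 260)]
frame_b = [(152, 212), (101, 198), (131, 263)]  # same markers, slightly moved
print(track_markers(frame_a, frame_b))  # → [(0, 1), (1, 0), (2, 2)]
```

Real systems refine this with per-marker motion prediction and sub-pixel centroid detection, but the chaining of frame-to-frame matches is the core idea.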
MARKER-LESS technologies instead detect and track natural features of the face, such as the corners of the lips and eyes, and wrinkles. These vision-based approaches can also track pupil movement, the eyelids, occlusion of the teeth by the lips, and the tongue, all of which are notoriously difficult in most computer-animated features.
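To make the eyelid-tracking idea concrete, here is a small sketch (not from the video; the landmark layout and coordinates are illustrative) of the common "eye aspect ratio" computed from six landmarks around one eye: the ratio drops toward zero as the eyelid closes, so it can drive a character's blink directly.

```python
import math

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks around one eye, ordered
    corner, upper-left, upper-right, corner, lower-right, lower-left
    (a common 6-point layout). A low ratio means the eyelid is closing."""
    p1, p2, p3, p4, p5, p6 = eye
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    # Average vertical lid opening divided by horizontal eye width.
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

# Hypothetical landmark sets for an open and a nearly closed eye:
open_eye   = [(0, 0), (3, -2.0), (7, -2.0), (10, 0), (7, 2.0), (3, 2.0)]
closed_eye = [(0, 0), (3, -0.3), (7, -0.3), (10, 0), (7, 0.3), (3, 0.3)]
print(eye_aspect_ratio(open_eye))    # → 0.4
print(eye_aspect_ratio(closed_eye))  # → 0.06
```

In a full system the landmarks themselves come from a face-landmark detector running on each video frame; per-feature ratios like this one are then mapped onto the rig's expression controls.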
See also our previous short documentary video about
Working Group: INSIDE OUT
Authors: POLLINA Flavia Beatrice, ROLLO Manoj, SOLLI Elena