Integrating Human Body MoCaps into Blender using RGB Images
Authors: Jordi Sanchez-Riera, Francesc Moreno-Noguer
Keywords: MoCap; 2D/3D human pose estimation; synthetic human model; action mimic.
Abstract:
Reducing the complexity and cost of a Motion Capture (MoCap) system has been of great interest in recent years. Unlike other systems that rely on depth range cameras, we present an algorithm that works as a MoCap system with a single Red-Green-Blue (RGB) camera and is fully integrated into off-the-shelf rendering software. This makes our system easily deployable in outdoor and unconstrained scenarios. Our approach builds on three main modules. The first estimates the 2D body pose from a single input RGB image; the second estimates the 3D human pose from the previously computed 2D coordinates; and the last computes the joint rotations needed to reach the goal 3D point coordinates on the 3D virtual human model. We quantitatively evaluate the first two modules on synthetic images, and provide qualitative results of the overall system on real images recorded with a webcam.
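The last module of the pipeline, turning predicted 3D joint coordinates into joint rotations for a rigged model, can be illustrated with a short sketch. The snippet below is a minimal, hypothetical example: the toy skeleton, joint names and the shortest-arc quaternion formulation are assumptions made for illustration, not the authors' implementation or the actual Blender integration. For each bone it computes the rotation that aligns the rest-pose bone direction with the direction implied by the estimated 3D joints; inside Blender, such quaternions would typically be assigned to the armature's pose bones.

```python
# Sketch of the third module: per-bone rotations from goal 3D joint positions.
# Skeleton layout and joint names are illustrative assumptions.
import numpy as np

# (child, parent) pairs defining a toy skeleton; a real rig has many more bones.
BONES = [("elbow_r", "shoulder_r"), ("wrist_r", "elbow_r")]

def shortest_arc_quaternion(v_from, v_to):
    """Unit quaternion (w, x, y, z) rotating v_from onto v_to."""
    a = v_from / np.linalg.norm(v_from)
    b = v_to / np.linalg.norm(v_to)
    w = 1.0 + float(np.dot(a, b))
    if w < 1e-8:                      # opposite vectors: rotate 180 deg about any orthogonal axis
        axis = np.cross(a, [1.0, 0.0, 0.0])
        if np.linalg.norm(axis) < 1e-8:
            axis = np.cross(a, [0.0, 1.0, 0.0])
        q = np.concatenate(([0.0], axis))
    else:
        q = np.concatenate(([w], np.cross(a, b)))
    return q / np.linalg.norm(q)

def bone_rotations(rest_joints, goal_joints):
    """Rotation per bone taking the rest pose to the estimated 3D pose."""
    rotations = {}
    for child, parent in BONES:
        rest_dir = rest_joints[child] - rest_joints[parent]
        goal_dir = goal_joints[child] - goal_joints[parent]
        rotations[child] = shortest_arc_quaternion(rest_dir, goal_dir)
    return rotations

if __name__ == "__main__":
    rest = {"shoulder_r": np.array([0.0, 0.0, 1.5]),
            "elbow_r":    np.array([0.0, -0.3, 1.5]),
            "wrist_r":    np.array([0.0, -0.6, 1.5])}
    goal = {"shoulder_r": np.array([0.0, 0.0, 1.5]),
            "elbow_r":    np.array([0.0, -0.3, 1.5]),
            "wrist_r":    np.array([0.0, -0.3, 1.2])}   # forearm raised by 90 degrees
    for bone, quat in bone_rotations(rest, goal).items():
        print(bone, np.round(quat, 3))
```

In this toy example the elbow bone is unchanged (identity quaternion) and the forearm gets a 90 degree rotation about the x axis; a full system would also handle bone roll and joint limits, which this sketch ignores.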
Pages: 285 to 290
Copyright: Copyright (c) IARIA, 2020
Publication date: March 22, 2020
Published in: ACHI 2020, The Thirteenth International Conference on Advances in Computer-Human Interactions
ISSN: 2308-4138
ISBN: 978-1-61208-761-0
Location: Valencia, Spain
Dates: November 21, 2020 to November 25, 2020