Reconstructing 3D Human Pose by Watching Humans in the Mirror

CVPR 2021 (Oral)

Qi Fang*, Qing Shuai*, Junting Dong, Hujun Bao, Xiaowei Zhou

State Key Lab of CAD & CG, Zhejiang University   
* Equal contribution

Abstract


Given an Internet video, our method can generate accurate 3D positions and poses, which can be used to control a virtual character.

In this paper, we introduce the new task of reconstructing 3D human pose from a single image in which we can see the person and the person's image through a mirror. Compared to general scenarios of 3D pose estimation from a single view, the mirror reflection provides an additional view for resolving the depth ambiguity. We develop an optimization-based approach that exploits mirror symmetry constraints for accurate 3D pose reconstruction. We also provide a method to estimate the surface normal of the mirror from vanishing points in the single image. To validate the proposed approach, we collect a large-scale dataset named Mirrored-Human, which covers a large variety of human subjects, poses and backgrounds. The experiments demonstrate that, when trained on Mirrored-Human with our reconstructed 3D poses as pseudo ground-truth, the accuracy and generalizability of existing single-view 3D pose estimators can be largely improved.
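The two geometric ingredients mentioned above, the mirror-symmetry constraint and the normal estimation from a vanishing point, can be sketched in a few lines of code. The snippet below is a minimal illustration, not the authors' implementation: `reflect_points` shows the reflection that the optimization enforces between the real and mirrored joints, and `mirror_normal_from_vanishing_point` shows one standard way to recover the normal direction from the vanishing point of lines joining corresponding 2D keypoints (all function and variable names are hypothetical).

```python
import numpy as np

def reflect_points(X, n, d):
    """Reflect 3D points X (N, 3) across the mirror plane {x : n.x + d = 0}.

    Hypothetical helper illustrating the mirror-symmetry constraint:
    the mirrored person's joints should coincide with the reflection
    of the real person's joints about the estimated mirror plane.
    """
    n = n / np.linalg.norm(n)
    dist = X @ n + d                       # signed distance to the plane
    return X - 2.0 * dist[:, None] * n[None, :]

def mirror_normal_from_vanishing_point(kpts_real, kpts_mirror, K):
    """Estimate the mirror normal direction from a single image (sketch).

    Lines joining corresponding 2D keypoints of the real and mirrored
    person are parallel to the mirror normal in 3D, so in the image they
    meet at a vanishing point v; the normal direction is then K^{-1} v
    up to sign and scale. Least-squares version via SVD.
    """
    P = np.concatenate([kpts_real, np.ones((len(kpts_real), 1))], axis=1)
    Q = np.concatenate([kpts_mirror, np.ones((len(kpts_mirror), 1))], axis=1)
    lines = np.cross(P, Q)                 # image line through each keypoint pair
    _, _, Vt = np.linalg.svd(lines)        # v minimizes |lines @ v|
    v = Vt[-1]
    n = np.linalg.inv(K) @ v               # back-project to a 3D direction
    return n / np.linalg.norm(n)
```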


Overview video

Collected Internet Dataset



Collected Evaluation Dataset