PVNet: Pixel-wise Voting Network for 6DoF Pose Estimation

CVPR 2019 (oral)
Sida Peng*,1 Yuan Liu*,1 Qixing Huang2 Xiaowei Zhou†, 1 Hujun Bao†, 1
1. State Key Lab of CAD & CG, Zhejiang University      2. Graphics & AI Lab, University of Texas at Austin
* The first two authors contributed equally         † Corresponding authors

Abstract

This paper addresses the challenge of 6DoF pose estimation from a single RGB image under severe occlusion or truncation. Many recent works have shown that a two-stage approach, which first detects keypoints and then solves a Perspective-n-Point (PnP) problem for pose estimation, achieves remarkable performance. However, most of these methods only localize a set of sparse keypoints by regressing their image coordinates or heatmaps, which are sensitive to occlusion and truncation. Instead, we introduce a Pixel-wise Voting Network (PVNet) to regress pixel-wise unit vectors pointing to the keypoints and use these vectors to vote for keypoint locations using RANSAC. This creates a flexible representation for localizing occluded or truncated keypoints. Another important feature of this representation is that it provides uncertainties of keypoint locations that can be further leveraged by the PnP solver. Experiments show that the proposed approach outperforms the state of the art on the LINEMOD, Occlusion LINEMOD and YCB-Video datasets by a large margin, while being efficient for real-time pose estimation. We further create a Truncation LINEMOD dataset to validate the robustness of our approach against truncation.
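The voting scheme described above can be sketched in a few lines of NumPy. This is a simplified illustration, not the paper's implementation: given the pixels of an object mask and the network's per-pixel unit vectors toward one keypoint, pairs of pixels are sampled at random, each pair's rays are intersected to produce a keypoint hypothesis, and every pixel votes for hypotheses whose direction agrees with its predicted vector. The vote-weighted mean and covariance of the hypotheses give the keypoint location and its uncertainty for the PnP stage. Function and parameter names (`ransac_vote`, `n_hyps`, `inlier_thresh`) are illustrative choices, not names from the paper's code.

```python
import numpy as np

def ransac_vote(pixels, vectors, n_hyps=128, inlier_thresh=0.99, seed=0):
    """RANSAC-style voting for one 2D keypoint (illustrative sketch).

    pixels  : (N, 2) float array of object-mask pixel coordinates.
    vectors : (N, 2) float array; vectors[k] is a unit vector pointing
              from pixels[k] toward the keypoint.
    Returns (mean, cov): the vote-weighted mean keypoint location and the
    covariance of the hypotheses, usable by an uncertainty-aware PnP solver.
    """
    rng = np.random.default_rng(seed)
    hyps, weights = [], []
    for _ in range(n_hyps):
        i, j = rng.choice(len(pixels), size=2, replace=False)
        # Intersect the rays p_i + t*v_i and p_j + s*v_j:
        # solve t*v_i - s*v_j = p_j - p_i for (t, s).
        A = np.stack([vectors[i], -vectors[j]], axis=1)
        if abs(np.linalg.det(A)) < 1e-6:
            continue  # nearly parallel directions, no stable intersection
        t = np.linalg.solve(A, pixels[j] - pixels[i])[0]
        h = pixels[i] + t * vectors[i]
        # A pixel votes for h if the direction from the pixel to h agrees
        # with its predicted unit vector (cosine above the threshold).
        d = h[None, :] - pixels
        d /= np.linalg.norm(d, axis=1, keepdims=True) + 1e-9
        w = ((d * vectors).sum(axis=1) > inlier_thresh).sum()
        hyps.append(h)
        weights.append(float(w))
    hyps, w = np.array(hyps), np.array(weights)
    mean = (w[:, None] * hyps).sum(0) / w.sum()
    diff = hyps - mean
    cov = (w[:, None, None] * diff[:, :, None] * diff[:, None, :]).sum(0) / w.sum()
    return mean, cov
```

Because every visible pixel contributes a vote, an occluded or truncated keypoint can still be localized from the pixels that remain, which is the intuition behind the robustness claims above.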

Introduction video

Deal with occlusion

Visualizations of results on the Occlusion LINEMOD dataset. Green 3D bounding boxes represent the ground truth poses while blue 3D bounding boxes represent our predictions.

Deal with truncation

Visualizations of results on the Truncation LINEMOD dataset. Green 3D bounding boxes represent the ground truth poses while blue 3D bounding boxes represent our predictions.

Real-time demo