Due to its ability to synthesize high-quality novel views, Neural Radiance Fields (NeRF) have recently been exploited to improve visual localization in known environments. However, existing methods mostly utilize NeRF for data augmentation to improve regression-model training, and their performance on novel viewpoints and appearances remains limited due to the lack of geometric constraints. In this paper, we propose PNeRFLoc, a novel visual localization framework based on a unified point-based representation. On the one hand, PNeRFLoc supports initial pose estimation by matching 2D and 3D feature points, as in traditional structure-based methods; on the other hand, it enables pose refinement via novel view synthesis using rendering-based optimization. Specifically, we propose a novel feature adaptation module to close the gap between the features used for visual localization and those used for neural rendering. To improve the efficacy and efficiency of neural rendering-based optimization, we also develop an efficient rendering-based framework with a warping loss function. Extensive experiments demonstrate that PNeRFLoc performs best on the synthetic dataset when the 3D NeRF model can be well learned, and that it significantly outperforms all NeRF-boosted localization methods while achieving on-par state-of-the-art performance on real-world benchmark localization datasets.
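As a concrete illustration of the structure-based initialization, the following is a minimal Python sketch that recovers an initial pose from 2D-3D feature matches with OpenCV's PnP-RANSAC solver. It is a generic stand-in rather than the paper's implementation; the function name initial_pose, the RANSAC parameters, and the variable names are all assumptions.

import cv2
import numpy as np

def initial_pose(pts3d, pts2d, K):
    # pts3d: (N, 3) scene points matched to pts2d: (N, 2) query keypoints;
    # K: (3, 3) camera intrinsics. Returns a world-to-camera pose (R, t).
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts3d.astype(np.float64), pts2d.astype(np.float64),
        K.astype(np.float64), distCoeffs=None,
        iterationsCount=1000, reprojectionError=3.0)
    if not ok:
        raise RuntimeError("PnP-RANSAC failed: degenerate or too few matches")
    R, _ = cv2.Rodrigues(rvec)  # axis-angle to rotation matrix
    return R, tvec.reshape(3)

The resulting pose serves as the starting point for the rendering-based refinement sketched after the framework overview below.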
Framework Overview
Visual localization with PNeRFLoc. In the proposed framework, we associate raw point clouds with scene-agnostic localization features and train a scene-specific feature adaptation module together with the point-based neural radiance field. PNeRFLoc then integrates structure-based localization with novel rendering-based optimization to accurately estimate the 6-DOF camera pose of the query image.
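To make the rendering-based optimization stage concrete, here is a minimal PyTorch sketch of pose refinement with a warping-style photometric loss. It assumes a depth map rendered from the point-based scene model at the current pose estimate and a posed reference image; it illustrates the general technique rather than the paper's exact warping loss, and all names (so3_exp, warping_loss, refine_pose, T_init, and so on) are assumptions.

import torch
import torch.nn.functional as F

def skew(v):
    # (3,) vector -> 3x3 skew-symmetric matrix.
    z = torch.zeros((), dtype=v.dtype, device=v.device)
    return torch.stack([
        torch.stack([z, -v[2], v[1]]),
        torch.stack([v[2], z, -v[0]]),
        torch.stack([-v[1], v[0], z]),
    ])

def so3_exp(w):
    # Axis-angle (3,) -> rotation matrix via Rodrigues' formula.
    theta = w.norm() + 1e-8
    K = skew(w / theta)
    eye = torch.eye(3, dtype=w.dtype, device=w.device)
    return eye + torch.sin(theta) * K + (1.0 - torch.cos(theta)) * (K @ K)

def warping_loss(query_img, ref_img, depth, K, T_query, T_ref):
    # query_img, ref_img: (3, H, W); depth: (H, W), rendered from the
    # point-based model; K: (3, 3) intrinsics; T_*: (4, 4) world-to-camera.
    H, W = depth.shape
    ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                            torch.arange(W, dtype=torch.float32),
                            indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)]).reshape(3, -1)
    cam = (torch.linalg.inv(K) @ pix) * depth.reshape(1, -1)  # backproject
    cam = torch.cat([cam, torch.ones_like(cam[:1])], dim=0)   # homogeneous
    ref = (T_ref @ (torch.linalg.inv(T_query) @ cam))[:3]     # query -> ref
    proj = K @ ref
    uv = proj[:2] / proj[2:].clamp(min=1e-6)                  # project
    # Normalize pixel coordinates to [-1, 1] for grid_sample.
    grid = torch.stack([uv[0] / (W - 1) * 2 - 1,
                        uv[1] / (H - 1) * 2 - 1], dim=-1).reshape(1, H, W, 2)
    warped = F.grid_sample(ref_img[None], grid, align_corners=True)
    return (warped[0] - query_img).abs().mean()  # photometric L1

def compose(T_init, delta):
    # Apply a 6-DoF update (axis-angle rotation + translation) to T_init.
    T = torch.eye(4)
    T[:3, :3] = so3_exp(delta[:3]) @ T_init[:3, :3]
    T[:3, 3] = T_init[:3, 3] + delta[3:]
    return T

def refine_pose(T_init, query_img, ref_img, depth, K, T_ref, iters=200):
    # Gradient-descend the warping loss over a pose update. In the full
    # method the depth would be re-rendered as the pose changes; it is
    # held fixed here for brevity.
    delta = torch.zeros(6, requires_grad=True)
    optim = torch.optim.Adam([delta], lr=1e-3)
    for _ in range(iters):
        loss = warping_loss(query_img, ref_img, depth, K,
                            compose(T_init, delta), T_ref)
        optim.zero_grad()
        loss.backward()
        optim.step()
    return compose(T_init, delta.detach())

In practice, the PnP pose from the first sketch would be converted to a 4x4 matrix and passed as T_init, and the refinement repeated (or the depth re-rendered) until the reference image warped through the rendered geometry agrees photometrically with the query image.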
@inproceedings{zhao2024pnerfloc,
  title={PNeRFLoc: Visual Localization with Point-based Neural Radiance Fields},
  author={Zhao, Boming and Yang, Luwei and Mao, Mao and Bao, Hujun and Cui, Zhaopeng},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={38},
  number={7},
  pages={7450--7459},
  year={2024}
}