NIS-SLAM

Neural Implicit Semantic RGB-D SLAM for 3D Consistent Scene Understanding

TVCG 2024 (ISMAR Journal Track)

State Key Lab of CAD & CG, Zhejiang University
NIS-SLAM architecture.

We present NIS-SLAM, a neural implicit semantic RGB-D SLAM system that incrementally reconstructs the environment with 3D consistent scene understanding. As shown in the figure, taking continuous RGB-D frames and noisy 2D segmentation results as input, our system reconstructs high-fidelity surfaces and geometry, learns a 3D consistent semantic field, and recovers the objects in the scene.

Abstract

In recent years, the paradigm of neural implicit representations has gained substantial attention in the field of Simultaneous Localization and Mapping (SLAM). However, a notable gap exists in existing approaches when it comes to scene understanding. In this paper, we introduce NIS-SLAM, an efficient neural semantic RGB-D SLAM system that leverages a pre-trained 2D segmentation network to learn consistent semantic representations. Specifically, we combine high-frequency multi-resolution tetrahedron-based features with low-frequency positional encoding to perform scene reconstruction and understanding; this combination ensures both memory efficiency and spatial consistency. Moreover, to address the inconsistency of 2D segmentation results across multiple views, we propose a fusion strategy that integrates the semantic probabilities from previous non-keyframes into keyframes to achieve consistent semantic learning. Furthermore, we adopt confidence-based pixel sampling and a progressive optimization weight function for robust camera tracking. Extensive experimental results on various datasets show that our system performs better than, or competitively with, existing neural dense implicit RGB-D SLAM approaches. Finally, we show that our approach can be used in augmented reality applications.
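As a rough illustration of the confidence-based pixel sampling mentioned above, the following sketch draws tracking pixels with probability proportional to a per-pixel confidence map. The confidence definition, the function name sample_pixels_by_confidence, and the multinomial sampling scheme are illustrative assumptions, not the paper's exact formulation.

import torch

def sample_pixels_by_confidence(confidence: torch.Tensor, n_samples: int) -> torch.Tensor:
    """Draw pixel coordinates with probability proportional to confidence.

    confidence: (H, W) per-pixel scalar in [0, 1] (assumed form, not the
    paper's exact definition). Returns (n_samples, 2) integer (row, col)
    coordinates suitable for ray sampling during tracking.
    """
    h, w = confidence.shape
    probs = confidence.flatten().clamp_min(1e-8)  # avoid zero-probability pixels
    probs = probs / probs.sum()
    flat_idx = torch.multinomial(probs, n_samples, replacement=True)
    return torch.stack((flat_idx // w, flat_idx % w), dim=-1)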


Framework Overview

NIS-SLAM Overview.

Our system takes RGB-D frames as input to perform camera tracking and mapping via volume rendering, and models 3D semantics from the noisy 2D segmentation results of Mask2Former. Based on the hybrid implicit representation of multi-resolution tetrahedron features $\theta$ and positional encoding $\texttt{PE}(p)$, we decode the SDF $\sigma$, latent feature $h$, color $c$, and semantic probability $s$ with three MLPs $\{\mathcal{M}_{geo}, \mathcal{M}_{color}, \mathcal{M}_{sem}\}$. To model consistent semantics, we fuse the multi-view semantics of nearby non-keyframes to learn a 3D consistent representation.
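To make the decoding pipeline concrete, here is a minimal PyTorch sketch of the hybrid representation described above. The tetrahedron feature lookup is abstracted away as a precomputed input tetra_feat; the HybridDecoder class, all layer widths, and the color head (which in the actual system may also condition on other inputs such as view direction) are illustrative assumptions, not the paper's implementation.

import torch
import torch.nn as nn

def positional_encoding(p: torch.Tensor, n_freqs: int = 4) -> torch.Tensor:
    """Low-frequency sinusoidal encoding PE(p) of 3D points (N, 3) -> (N, 6*n_freqs)."""
    freqs = 2.0 ** torch.arange(n_freqs, device=p.device)   # (F,)
    angles = p[..., None] * freqs                           # (N, 3, F)
    return torch.cat((torch.sin(angles), torch.cos(angles)), dim=-1).flatten(1)

class HybridDecoder(nn.Module):
    """Sketch of the three decoders: M_geo -> (SDF sigma, latent h),
    M_color -> c, M_sem -> s. Feature dimensions are illustrative."""
    def __init__(self, feat_dim=32, pe_dim=24, hidden=64, latent=16, n_classes=40):
        super().__init__()
        in_dim = feat_dim + pe_dim
        self.m_geo = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, 1 + latent))
        self.m_color = nn.Sequential(nn.Linear(latent, hidden), nn.ReLU(),
                                     nn.Linear(hidden, 3), nn.Sigmoid())
        self.m_sem = nn.Sequential(nn.Linear(latent, hidden), nn.ReLU(),
                                   nn.Linear(hidden, n_classes))

    def forward(self, tetra_feat, pe):
        geo = self.m_geo(torch.cat((tetra_feat, pe), dim=-1))
        sdf, h = geo[..., :1], geo[..., 1:]      # SDF sigma and latent feature h
        color = self.m_color(h)                  # color c
        sem_logits = self.m_sem(h)               # softmax at render time -> s
        return sdf, h, color, sem_logits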


Dense Reconstruction

Reconstruction Results on ScanNet

Reconstruction Result.

Annotations: Compared to the baselines, our method reconstructs more accurate, detailed geometry and generates more complete, smoother meshes.

Object Reconstruction of Replica

Reconstruction Result.

Annotations: We show selected objects for comparison with vMAP.


Semantic Segmentation

Semantic segmentation.

Annotations: Semantic segmentation results on Replica. We show the multi-view segmentation results of different approaches. The top, middle, and bottom parts show the segmentation results of Mask2Former, our approach without semantic fusion, and our approach with semantic fusion, respectively. Comparing the segmentation results from different views shows that our method learns a more consistent semantic representation.
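The semantic fusion can be sketched as a simple accumulation of per-pixel class probabilities, assuming the non-keyframe segmentation maps have already been warped into the keyframe. The function fuse_semantic_probs and the uniform (unweighted) accumulation are assumptions for illustration, not the paper's exact weighting.

import torch

def fuse_semantic_probs(keyframe_probs, nonkeyframe_probs):
    """Fuse per-pixel class probabilities from nearby non-keyframes into a
    keyframe. Sketch only: assumes every map is already aligned (warped)
    to the keyframe and that views are weighted uniformly.

    keyframe_probs:    (H, W, C) softmax probabilities for the keyframe
    nonkeyframe_probs: list of (H, W, C) maps from nearby non-keyframes
    """
    fused = keyframe_probs.clone()
    for probs in nonkeyframe_probs:
        fused = fused + probs
    return fused / fused.sum(dim=-1, keepdim=True)  # renormalize per pixel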




BibTeX

@ARTICLE{nis_slam,
  author={Zhai, Hongjia and Huang, Gan and Hu, Qirui and Li, Guanglin and Bao, Hujun and Zhang, Guofeng},
  journal={IEEE Transactions on Visualization and Computer Graphics},
  title={NIS-SLAM: Neural Implicit Semantic RGB-D SLAM for 3D Consistent Scene Understanding},
  year={2024},
  pages={1-11}
}