Neural 3D Reconstruction in the Wild

SIGGRAPH 2022 (Conference Proceedings)


Jiaming Sun1, Xi Chen2, Qianqian Wang3, Zhengqi Li3, Hadar Averbuch-Elor3, Xiaowei Zhou2, Noah Snavely3

1 Image Derivative Inc.  2 Zhejiang University   3 Cornell Tech & Cornell University  

Abstract



We are witnessing an explosion of neural implicit representations in computer vision and graphics. Their applicability has recently expanded beyond tasks such as shape generation and image-based rendering to the fundamental problem of image-based 3D reconstruction. However, existing methods typically assume constrained 3D environments with constant illumination captured by a small set of roughly uniformly distributed cameras. We introduce a new method that enables efficient and accurate surface reconstruction from Internet photo collections in the presence of varying illumination. To achieve this, we propose a hybrid voxel- and surface-guided sampling technique that allows for more efficient ray sampling around surfaces and leads to significant improvements in reconstruction quality. Further, we present a new benchmark and protocol for evaluating reconstruction performance on such in-the-wild scenes. We perform extensive experiments, demonstrating that our approach surpasses both classical and neural reconstruction methods on a wide variety of metrics.
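The hybrid sampling scheme is only named above, so as a rough illustration (not the authors' implementation), the sketch below shows one plausible reading: a voxel-guided stage that restricts samples to ray segments passing through a sparse occupancy grid (e.g., built from SfM points), followed by a surface-guided stage that concentrates extra samples around a current surface-depth estimate. All function and parameter names here are hypothetical.

```python
import numpy as np

def hybrid_sample_depths(ray_o, ray_d, occupied, voxel_size,
                         near=0.1, far=10.0, n_march=512,
                         n_voxel=32, n_surface=16, sigma=0.05,
                         surface_depth=None, seed=None):
    """Sketch of hybrid voxel- and surface-guided sampling for one ray.

    `occupied` is a set of integer (i, j, k) voxel indices marking
    occupied space (e.g., voxels containing SfM sparse points).
    """
    rng = np.random.default_rng(seed)

    # Voxel-guided stage: march the ray at fine steps and keep only depths
    # whose 3D points fall inside an occupied voxel.
    t = np.linspace(near, far, n_march)
    pts = ray_o + t[:, None] * ray_d
    keys = map(tuple, np.floor(pts / voxel_size).astype(int))
    mask = np.fromiter((k in occupied for k in keys), dtype=bool, count=n_march)
    t_occ = t[mask]
    if t_occ.size == 0:
        return t[:: n_march // n_voxel]  # no occupied voxel hit: uniform fallback
    t_vox = rng.choice(t_occ, size=min(n_voxel, t_occ.size), replace=False)

    # Surface-guided stage: once a surface-depth estimate exists (e.g., an
    # SDF zero crossing from a previous pass), draw extra samples from a
    # narrow Gaussian around it to refine the surface region.
    if surface_depth is not None:
        t_surf = rng.normal(surface_depth, sigma, size=n_surface)
        t_vox = np.concatenate([t_vox, np.clip(t_surf, near, far)])

    return np.sort(t_vox)

# Toy usage: a ray along +z through two occupied voxels.
occupied = {(0, 0, 0), (0, 0, 1)}
depths = hybrid_sample_depths(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                              occupied, voxel_size=0.5, surface_depth=0.6)
```

The intuition is that both stages spend the per-ray sample budget near likely surfaces instead of in empty space, which is where the reported efficiency and quality gains come from.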


Reconstruction showcase



Zoom in by scrolling. Select “Matcap” in the Model Inspector to inspect the geometry without the baked ambient occlusion.


Comparison with baseline methods



Training speed comparison



The Heritage-Recon Benchmark


To the best of our knowledge, there is no existing dataset pairing Internet photo collections with ground-truth 3D geometry. Therefore, we introduce Heritage-Recon, a new benchmark dataset with LiDAR scans as ground truth, derived from Open Heritage 3D. The following GIFs demonstrate the alignment quality between the LiDAR scans and the images.
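To make the evaluation idea concrete, here is a generic point-cloud metric of the kind typically used with LiDAR ground truth: precision (accuracy), recall (completeness), and F-score at a distance threshold. This is a hedged sketch of a standard metric, not necessarily the exact Heritage-Recon protocol; the function name and threshold are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def precision_recall_fscore(pred_pts, gt_pts, tau=0.05):
    """Precision / recall / F-score between a reconstructed point cloud and
    an aligned LiDAR ground-truth cloud, with threshold `tau` in scene
    units. Assumes both clouds are already in the same coordinate frame."""
    d_pred = cKDTree(gt_pts).query(pred_pts)[0]  # pred -> GT: accuracy
    d_gt = cKDTree(pred_pts).query(gt_pts)[0]    # GT -> pred: completeness
    precision = float((d_pred < tau).mean())
    recall = float((d_gt < tau).mean())
    f = 2 * precision * recall / max(precision + recall, 1e-8)
    return precision, recall, f

# Toy usage: two noisy samplings of the same random point set.
rng = np.random.default_rng(0)
gt = rng.random((10000, 3))
pred = gt + rng.normal(0.0, 0.01, gt.shape)
print(precision_recall_fscore(pred, gt, tau=0.05))
```

Careful alignment between the scans and the SfM reconstruction matters here: any residual registration error between the LiDAR frame and the image-derived frame inflates both distance terms and biases the scores downward.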


Citation


@inproceedings{sun2022neuconw,
  title={Neural {3D} Reconstruction in the Wild},
  author={Sun, Jiaming and Chen, Xi and Wang, Qianqian and Li, Zhengqi and Averbuch-Elor, Hadar and Zhou, Xiaowei and Snavely, Noah},
  booktitle={SIGGRAPH Conference Proceedings},
  year={2022}
}