Efficient Neural Radiance Fields with Learned Depth-Guided Sampling


Haotong Lin*, Sida Peng*, Zhen Xu, Hujun Bao, Xiaowei Zhou

State Key Lab of CAD & CG, Zhejiang University
* denotes equal contribution

Abstract


ENeRF can synthesize novel views of dynamic scenes in real time.

This paper aims to reduce the rendering time of neural radiance fields (NeRF). Some recent works equip NeRF with image encoders, enabling generalization across scenes and avoiding per-scene optimization. However, their rendering process is generally very slow; a major factor is that they sample many points in empty space when inferring the radiance field. In this paper, we present a hybrid scene representation that combines the best of implicit radiance fields and explicit depth maps for efficient rendering. Specifically, we first build a cascade cost volume to efficiently predict the coarse geometry of the scene. The coarse geometry allows us to sample only a few points near the scene surface, which significantly improves the rendering speed. This process is fully differentiable, enabling us to jointly learn the depth prediction and radiance field networks from RGB images alone. Experiments show that the proposed approach achieves state-of-the-art performance on the DTU, Real Forward-facing and NeRF Synthetic datasets, while being at least 50 times faster than previous generalizable radiance field methods. We also demonstrate the capability of our method to synthesize free-viewpoint videos of dynamic human performers in real time. The code will be released for reproducibility.
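For concreteness, below is a minimal PyTorch sketch of the depth-guided sampling idea described in the abstract: given a coarse depth map predicted from the cost volume, each ray is sampled only inside a small interval around its predicted depth instead of along the whole ray. All names here (depth_guided_samples, rays_o, rays_d, interval) are hypothetical illustrations of the idea, not the authors' implementation.

import torch

def depth_guided_samples(rays_o, rays_d, depth, interval, n_samples=8):
    """Place a few samples inside a depth-centered interval along each ray.

    rays_o, rays_d: (N, 3) ray origins and directions.
    depth:          (N,) coarse depth per ray (e.g. from a cost volume).
    interval:       (N,) half-width of the sampling range around the depth.
    n_samples:      number of samples per ray (small, e.g. 2-8).
    """
    near = (depth - interval).clamp(min=1e-3)   # lower bound of the interval
    far = depth + interval                      # upper bound of the interval
    t = torch.linspace(0.0, 1.0, n_samples, device=depth.device)      # (S,)
    z_vals = near[:, None] * (1.0 - t) + far[:, None] * t             # (N, S)
    # 3D sample points near the predicted surface: o + z * d
    pts = rays_o[:, None, :] + z_vals[:, :, None] * rays_d[:, None, :]  # (N, S, 3)
    return pts, z_vals

Because only a handful of points per ray are then evaluated by the radiance field network, rather than dense samples along the full ray as in standard NeRF, rendering becomes substantially faster.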


Overview video



Interactive free-viewpoint video demo on the ZJU-MoCap and DynamicCap datasets



Comparison with state-of-the-art methods under the generalization setting



Comparison with state-of-the-art methods under the per-scene optimization setting



Ablation studies on the main proposed components



Citation


@inproceedings{lin2021efficient,
  title={Efficient Neural Radiance Fields with Learned Depth-Guided Sampling},
  author={Lin, Haotong and Peng, Sida and Xu, Zhen and Bao, Hujun and Zhou, Xiaowei},
  booktitle={arXiv},
  year={2021}
}