¹Zhejiang University  ²Stanford University  ³HKUST
*Equal Contribution  †Corresponding Author
This paper addresses the challenge of reconstructing long volumetric videos from multi-view RGB videos. Recent dynamic view synthesis methods leverage powerful 4D representations, such as feature grids or point cloud sequences, to achieve high-quality rendering results. However, they are typically limited to short (1-2 s) video clips and often suffer from large memory footprints when handling longer videos. To solve this issue, we propose a novel 4D representation, named Temporal Gaussian Hierarchy, to compactly model long volumetric videos. Our key observation is that dynamic scenes generally exhibit varying degrees of temporal redundancy, as they consist of areas that change at different speeds. Motivated by this, our representation organizes 4D Gaussian primitives into a multi-level hierarchy of temporal segments, where each level describes scene content with a different degree of temporal change, so that slowly varying regions are shared across long time spans rather than stored per frame. Extensive experimental results demonstrate the superiority of our method over alternative methods in terms of training cost, rendering speed, and storage usage. To our knowledge, this work is the first approach capable of efficiently handling minutes of volumetric video data while maintaining state-of-the-art rendering quality.
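To make the hierarchy idea concrete, below is a minimal Python sketch of how a temporal hierarchy can exploit this redundancy. It is an illustrative reconstruction under stated assumptions, not the paper's implementation: the class name, the 1D temporal-opacity model, and the level-selection rule are placeholders we introduce here. The sketch assumes each 4D Gaussian carries a temporal center `mu_t` and temporal scale `sigma_t`, and that level `l` splits the timeline into `2**l` equal segments.

```python
import math
from collections import defaultdict

class TemporalGaussianHierarchy:
    """Illustrative sketch: a binary hierarchy of temporal segments in which
    slowly changing primitives live near the root (shared over long spans)
    and quickly changing primitives live near the leaves."""

    def __init__(self, num_frames: int, num_levels: int):
        self.num_frames = num_frames
        self.num_levels = num_levels
        # levels[l] maps a segment index to the primitives stored there.
        self.levels = [defaultdict(list) for _ in range(num_levels)]

    def segment_length(self, level: int) -> float:
        return self.num_frames / (2 ** level)

    def insert(self, primitive, mu_t: float, sigma_t: float):
        # Heuristic level selection (an assumption, not the paper's rule):
        # pick the finest level whose segments still cover the primitive's
        # temporal support (~3 sigma on each side).
        support = 6.0 * sigma_t
        level = self.num_levels - 1
        while level > 0 and self.segment_length(level) < support:
            level -= 1
        seg = min(int(mu_t / self.segment_length(level)), 2 ** level - 1)
        self.levels[level][seg].append((primitive, mu_t, sigma_t))

    def active_primitives(self, t: float):
        # Rendering frame t touches one segment per level, so the working
        # set is O(log T) segments regardless of total video length.
        # (A full method must also handle support spilling into neighboring
        # segments; this sketch ignores that boundary case.)
        out = []
        for level in range(self.num_levels):
            seg = min(int(t / self.segment_length(level)), 2 ** level - 1)
            for prim, mu_t, sigma_t in self.levels[level][seg]:
                # Temporal opacity: a 1D Gaussian falloff around mu_t.
                w = math.exp(-0.5 * ((t - mu_t) / sigma_t) ** 2)
                out.append((prim, w))
        return out
```

Under this scheme, querying a 10,000-frame video with 14 levels visits at most 14 segments per frame, which is why the per-frame working set stays bounded as the video grows.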
Given a long multi-view video sequence, our method generates a compact volumetric video with minimal training cost and memory usage while supporting real-time rendering at state-of-the-art quality.
@Article{xu2024longvolcap,
author = {Xu, Zhen and Xu, Yinghao and Yu, Zhiyuan and Peng, Sida and Sun, Jiaming and Bao, Hujun and Zhou, Xiaowei},
title = {Representing Long Volumetric Video with Temporal Gaussian Hierarchy},
journal = {ACM Transactions on Graphics},
number = {6},
volume = {43},
month = {November},
year = {2024},
url = {https://zju3dv.github.io/longvolcap}
}