1Zhejiang University 2Geely Automobile Research Institute
*Equal Contribution †Corresponding Author
This paper addresses the challenge of reconstructing dynamic 3D scenes with complex motions. Some recent works define 3D Gaussian primitives in a canonical space and use deformation fields to map the canonical primitives to observation spaces, achieving real-time dynamic view synthesis. However, these methods often struggle to handle scenes with complex motions due to the difficulty of optimizing deformation fields. To overcome this problem, we propose FreeTimeGS, a novel 4D representation that allows Gaussian primitives to appear at arbitrary times and locations. In contrast to canonical Gaussian primitives, our representation possesses strong flexibility, improving its ability to model dynamic 3D scenes. In addition, we endow each Gaussian primitive with a motion function, allowing it to move to neighboring regions over time, which reduces temporal redundancy. Experimental results on several datasets show that the rendering quality of our method outperforms recent methods by a large margin. The code will be released for reproducibility.
We represent a dynamic scene using Gaussian primitives that can appear at any time and any location. Each Gaussian is assigned a motion function to model its movement, and its opacity is modulated by a temporal opacity function that controls the impact of the primitive over time. With this 4D representation, we further regularize the Gaussians with a 4D regularization loss and optimize the rasterization result with a rendering loss to reconstruct a dynamic 3D scene from multi-view videos. A minimal sketch of one such primitive is given below.
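The sketch below illustrates the idea of a Gaussian primitive with a motion function and a temporal opacity function. It is not the authors' code: the class name FreeTimeGaussian, the linear motion model, and the Gaussian-shaped temporal opacity window are illustrative assumptions; the paper's exact parameterization may differ.

# Illustrative sketch only, assuming a linear motion function and a
# Gaussian-shaped temporal opacity window (not the released implementation).
import numpy as np

class FreeTimeGaussian:
    def __init__(self, center, velocity, t_center, t_scale, opacity):
        self.center = np.asarray(center, dtype=np.float64)      # spatial mean at time t_center
        self.velocity = np.asarray(velocity, dtype=np.float64)  # assumed linear motion
        self.t_center = float(t_center)  # time at which the primitive is most visible
        self.t_scale = float(t_scale)    # temporal support (std. dev. of the opacity window)
        self.opacity = float(opacity)    # peak spatial opacity

    def position(self, t):
        # Motion function: lets the primitive move to neighboring regions over time.
        return self.center + self.velocity * (t - self.t_center)

    def temporal_opacity(self, t):
        # Temporal opacity: modulates the primitive's impact over time.
        dt = (t - self.t_center) / self.t_scale
        return self.opacity * np.exp(-0.5 * dt * dt)

# Usage: query a primitive at a timestamp before rasterization.
g = FreeTimeGaussian(center=[0.0, 0.0, 1.0], velocity=[0.1, 0.0, 0.0],
                     t_center=0.5, t_scale=0.1, opacity=0.9)
print(g.position(0.6), g.temporal_opacity(0.6))

In this sketch, a primitive far from its t_center contributes little to the rendered image, which is how temporal redundancy is reduced: each primitive only needs to explain a short window of the video.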
@inproceedings{wang2025freetimegs,
author = {Wang, Yifan and Yang, Peishan and Xu, Zhen and Sun, Jiaming and Zhang, Zhanhua and Chen, Yong and Bao, Hujun and Peng, Sida and Zhou, Xiaowei},
title = {FreeTimeGS: Free Gaussian Primitives at Anytime Anywhere for Dynamic Scene Reconstruction},
booktitle = {CVPR},
year = {2025},
url = {https://zju3dv.github.io/freetimegs}
}