[ICLR 2025 Spotlight] Official implementation for "DynamicCity: Large-Scale Occupancy Generation from Dynamic Scenes"
3DTopia/DynamicCity

DynamicCity: Large-Scale Occupancy Generation from Dynamic Scenes

Hengwei Bian1,2,*    Lingdong Kong1,3    Haozhe Xie4    Liang Pan1,†,‡    Yu Qiao1    Ziwei Liu4
1Shanghai AI Laboratory    2Carnegie Mellon University
3National University of Singapore    4S-Lab, Nanyang Technological University
*Work done during an internship at Shanghai AI Laboratory    †Corresponding author    ‡Project lead

Teaser

LiDAR scene generation has been developing rapidly. However, existing methods primarily focus on generating static, single-frame scenes, overlooking the inherently dynamic nature of real-world driving environments. In this work, we introduce DynamicCity, a novel 4D occupancy generation framework capable of generating large-scale, high-quality dynamic LiDAR scenes with semantics. DynamicCity mainly consists of two key models: (1) a VAE model for learning HexPlane as the compact 4D representation. Instead of using naive averaging operations, DynamicCity employs a novel Projection Module to effectively compress 4D LiDAR features into six 2D feature maps for HexPlane construction, which significantly enhances HexPlane fitting quality (up to 12.56 mIoU gain). Furthermore, we utilize an Expansion & Squeeze Strategy to reconstruct 3D feature volumes in parallel, which improves both network training efficiency and reconstruction accuracy compared to naively querying each 3D point (up to 7.05 mIoU gain, 2.06x training speedup, and 70.84% memory reduction). (2) A DiT-based diffusion model for HexPlane generation. To make HexPlane feasible for DiT generation, a Padded Rollout Operation is proposed to reorganize all six feature planes of the HexPlane into a square 2D feature map. In particular, various conditions can be introduced into the diffusion or sampling process, supporting versatile 4D generation applications such as trajectory- and command-driven generation, inpainting, and layout-conditioned generation. Extensive experiments on the CarlaSC and Waymo datasets demonstrate that DynamicCity significantly outperforms existing state-of-the-art 4D LiDAR generation methods across multiple metrics. The code will be released to facilitate future research.
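As a rough illustration of the Padded Rollout idea, the sketch below packs six 2D feature planes into one zero-padded square map. The plane sizes, channel count, and grid layout here are made up for demonstration; the actual operation in the paper acts on learned HexPlane feature maps inside the network.

```python
import numpy as np

# Hypothetical HexPlane dimensions: spatial X, Y, Z, temporal T,
# and C feature channels (illustrative values only).
X, Y, Z, T, C = 16, 16, 4, 8, 3
planes = {
    "xy": np.random.randn(C, X, Y),
    "xz": np.random.randn(C, X, Z),
    "yz": np.random.randn(C, Y, Z),
    "xt": np.random.randn(C, X, T),
    "yt": np.random.randn(C, Y, T),
    "zt": np.random.randn(C, Z, T),
}

def padded_rollout(planes):
    """Arrange six 2D feature planes on one square canvas,
    zero-padding unused regions (a sketch of the Padded Rollout idea)."""
    C = next(iter(planes.values())).shape[0]
    # Use a 2x3 grid of cells sized by the largest plane in each dimension.
    hmax = max(p.shape[1] for p in planes.values())
    wmax = max(p.shape[2] for p in planes.values())
    side = max(2 * hmax, 3 * wmax)  # make the canvas square
    canvas = np.zeros((C, side, side), dtype=np.float32)
    for i, p in enumerate(planes.values()):
        r, c = divmod(i, 3)
        h, w = p.shape[1], p.shape[2]
        canvas[:, r * hmax : r * hmax + h, c * wmax : c * wmax + w] = p
    return canvas

rolled = padded_rollout(planes)  # a single square 2D feature map
```

A square layout like this lets a standard DiT backbone consume all six planes as one image-shaped tensor, which is the motivation stated in the abstract.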

Overview

Overview

Our DynamicCity framework consists of two key procedures: (a) encoding HexPlane with a VAE architecture, and (b) 4D scene generation with HexPlane DiT.

Updates

  • [February 2025]: Code released.
  • [October 2024]: Project page released.

Outline

⚙️ Installation

conda create -n dyncity python=3.10 -y
conda activate dyncity
conda install pytorch==2.4.0 pytorch-cuda=11.8 -c pytorch -c nvidia -y
conda install einops hydra-core matplotlib numpy omegaconf timm tqdm wandb -c conda-forge -y
pip install flash-attn --no-build-isolation

♨️ Data Preparation

Download the CarlaSC dataset from here and extract it into the ./carlasc directory. Your repository should look like this:

DynamicCity
├── carlasc/
│   ├── Cartesian/
│   │   ├── Train/
│   │   │   ├── Town01_Heavy
│   │   │   ├── ...
│   │   ├── Test/
├── ...

🚀 Getting Started

You can obtain our checkpoints from here. To use the pretrained models, simply unzip ckpts.zip as ./ckpts and run infer_dit.py.

To train the VAE on the CarlaSC dataset, run the following command:

torchrun --nproc-per-node 8 train.py VAE carlasc name=DYNAMIC_CITY_VAE

After the VAE is trained, save HexPlane rollouts using:

torchrun --nproc-per-node 8 infer_vae.py -n DYNAMIC_CITY_VAE --save_rollout --best

Then, you can train your DiT using this command:

torchrun --nproc-per-node 8 train.py DiT carlasc name=DYNAMIC_CITY_DIT vae_name=DYNAMIC_CITY_VAE

Finally, use DiT to sample novel city scenes:

torchrun --nproc-per-node 8 infer_dit.py -d DYNAMIC_CITY_DIT --best_vae

🏙️ Dynamic Scene Generation

Unconditional Generation

Unconditional Generation 1 Unconditional Generation 2 Unconditional Generation 3

HexPlane Conditional Generation

HexPlane Conditional Generation 1 HexPlane Conditional Generation 2 HexPlane Conditional Generation 3

Command & Trajectory-Driven Generation

Command & Trajectory-Driven Generation 1 Command & Trajectory-Driven Generation 2 Command & Trajectory-Driven Generation 3

Layout-Conditioned Generation

Layout-Conditioned Generation 1 Layout-Conditioned Generation 2

Dynamic Scene Inpainting

Dynamic Scene Inpainting 1 Dynamic Scene Inpainting 2

Citation

If you find this work helpful for your research, please consider citing our paper:

@inproceedings{bian2025dynamiccity,
  title={DynamicCity: Large-Scale Occupancy Generation from Dynamic Scenes},
  author={Bian, Hengwei and Kong, Lingdong and Xie, Haozhe and Pan, Liang and Qiao, Yu and Liu, Ziwei},
  booktitle={Proceedings of the International Conference on Learning Representations (ICLR)},
  year={2025},
}
