# LiDAR4D: Dynamic Neural Fields for Novel Space-time View LiDAR Synthesis

Zehan Zheng, Fan Lu, Weiyi Xue, Guang Chen†, Changjun Jiang († Corresponding author)
CVPR 2024
Paper (arXiv) | Paper (CVPR) | Project Page | Video | Poster | Slides
This repository is the official PyTorch implementation for LiDAR4D.
## Changelog

- 2024-6-1:🕹️ We release the simulator for easier rendering and manipulation. Happy Children's Day and have fun!
- 2024-5-4:📈 We update the flow fields and improve temporal interpolation.
- 2024-4-13:📈 We update the U-Net of LiDAR4D for better ray-drop refinement.
- 2024-4-5:🚀 The code of LiDAR4D is released.
- 2024-4-4:🔥 The preprint paper is available on arXiv, along with the project page.
- 2024-2-27:🎉 Our paper is accepted to CVPR 2024.
*(Demo video: LiDAR4D_demo.mp4)*
## Introduction

LiDAR4D is a differentiable LiDAR-only framework for novel space-time LiDAR view synthesis, which reconstructs dynamic driving scenarios and generates realistic LiDAR point clouds end-to-end. It adopts 4D hybrid neural representations and motion priors derived from point clouds for geometry-aware and time-consistent large-scale scene reconstruction.
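As context for the 4D hybrid representation, here is a minimal sketch of querying a multi-resolution hash encoding over space-time coordinates with tiny-cuda-nn (installed below). The hyperparameters are illustrative, not the paper's actual settings, and it assumes tiny-cuda-nn's grid encoding accepts 4 input dimensions:

```python
# Minimal sketch: a 4D (x, y, z, t) hash-grid feature query with tiny-cuda-nn.
# All configuration values here are illustrative, not LiDAR4D's real settings.
import torch
import tinycudann as tcnn

encoder = tcnn.Encoding(
    n_input_dims=4,  # space-time coordinate (x, y, z, t)
    encoding_config={
        "otype": "HashGrid",
        "n_levels": 16,
        "n_features_per_level": 2,
        "log2_hashmap_size": 19,
        "base_resolution": 16,
        "per_level_scale": 1.5,
    },
)

xyzt = torch.rand(1024, 4, device="cuda")  # coordinates normalized to [0, 1]
features = encoder(xyzt)                   # (1024, 32) multi-resolution features
```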
## Getting Started

### Setup

```bash
git clone https://github.com/ispc-lab/LiDAR4D.git
cd LiDAR4D

conda create -n lidar4d python=3.9
conda activate lidar4d

# PyTorch
# CUDA 12.1
pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 --index-url https://download.pytorch.org/whl/cu121
# CUDA 11.8
# pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 --index-url https://download.pytorch.org/whl/cu118
# CUDA <= 11.7
# pip install torch==2.0.0 torchvision torchaudio

# Dependencies
pip install -r requirements.txt

# Local compile for tiny-cuda-nn
git clone --recursive https://github.com/nvlabs/tiny-cuda-nn
cd tiny-cuda-nn/bindings/torch
python setup.py install

# Compile packages in utils
cd ../../..  # return to the LiDAR4D repo root
cd utils/chamfer3D
python setup.py install
```
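To verify the setup, a quick sanity check (ours, not part of the repo) that the CUDA build of PyTorch and the tiny-cuda-nn bindings import cleanly:

```python
# Sanity check: run inside the lidar4d environment after the steps above.
import torch
import tinycudann  # compiled from tiny-cuda-nn/bindings/torch

print(torch.__version__)          # e.g. 2.1.0+cu121
print(torch.cuda.is_available())  # should print True
```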
### KITTI-360 Dataset (Download)

We use sequence00 (`2013_05_28_drive_0000_sync`) for the experiments in our paper.
Download the KITTI-360 dataset (2D images are not needed) and put it into `data/kitti360` (or use a symlink: `ln -s DATA_ROOT/KITTI-360 ./data/kitti360/`).
The folder tree is as follows:

```
data
└── kitti360
    └── KITTI-360
        ├── calibration
        ├── data_3d_raw
        └── data_poses
```
Next, run the KITTI-360 dataset preprocessing (set `DATASET` and `SEQ_ID`):

```bash
bash preprocess_data.sh
```
After preprocessing, your folder structure should look like this:

```
configs
├── kitti360_{sequence_id}.txt
data
└── kitti360
    ├── KITTI-360
    │   ├── calibration
    │   ├── data_3d_raw
    │   └── data_poses
    ├── train
    ├── transforms_{sequence_id}test.json
    ├── transforms_{sequence_id}train.json
    └── transforms_{sequence_id}val.json
```
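To confirm preprocessing succeeded, you can peek at one of the generated transforms files. This sketch assumes a NeRF-style schema with a `frames` list and uses sequence ID `1908` as an example; treat both as assumptions:

```python
# Sketch: inspect a preprocessed split file (sequence ID 1908 is an example).
import json

with open("data/kitti360/transforms_1908train.json") as f:
    meta = json.load(f)

print(f"{len(meta['frames'])} frames in the training split")
```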
### Run

Set the corresponding sequence config path in `--config`, and optionally change the logging directory in `--workspace`. Remember to set an available GPU ID in `CUDA_VISIBLE_DEVICES`.

Run the following command:

```bash
# KITTI-360
bash run_kitti_lidar4d.sh
```
## Results

**KITTI-360 Dynamic Dataset** (Sequences: `2350`, `4950`, `8120`, `10200`, `10750`, `11400`)

| Method | Point Cloud<br>CD↓ | Point Cloud<br>F-Score↑ | Depth<br>RMSE↓ | Depth<br>MedAE↓ | Depth<br>LPIPS↓ | Depth<br>SSIM↑ | Depth<br>PSNR↑ | Intensity<br>RMSE↓ | Intensity<br>MedAE↓ | Intensity<br>LPIPS↓ | Intensity<br>SSIM↑ | Intensity<br>PSNR↑ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| LiDAR-NeRF | 0.1438 | 0.9091 | 4.1753 | 0.0566 | 0.2797 | 0.6568 | 25.9878 | 0.1404 | 0.0443 | 0.3135 | 0.3831 | 17.1549 |
| LiDAR4D (Ours) † | 0.1002 | 0.9320 | 3.0589 | 0.0280 | 0.0689 | 0.8770 | 28.7477 | 0.0995 | 0.0262 | 0.1498 | 0.6561 | 20.0884 |
**KITTI-360 Static Dataset** (Sequences: `1538`, `1728`, `1908`, `3353`)

| Method | Point Cloud<br>CD↓ | Point Cloud<br>F-Score↑ | Depth<br>RMSE↓ | Depth<br>MedAE↓ | Depth<br>LPIPS↓ | Depth<br>SSIM↑ | Depth<br>PSNR↑ | Intensity<br>RMSE↓ | Intensity<br>MedAE↓ | Intensity<br>LPIPS↓ | Intensity<br>SSIM↑ | Intensity<br>PSNR↑ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| LiDAR-NeRF | 0.0923 | 0.9226 | 3.6801 | 0.0667 | 0.3523 | 0.6043 | 26.7663 | 0.1557 | 0.0549 | 0.4212 | 0.2768 | 16.1683 |
| LiDAR4D (Ours) † | 0.0834 | 0.9312 | 2.7413 | 0.0367 | 0.0995 | 0.8484 | 29.3359 | 0.1116 | 0.0335 | 0.1799 | 0.6120 | 19.0619 |
†: Latest results, improved over those reported in the paper.
Experiments were conducted on an NVIDIA RTX 4090 GPU. Results may vary slightly due to randomness.
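For reference, the Chamfer Distance (CD) reported above measures the symmetric nearest-neighbor distance between predicted and ground-truth point clouds. The repo computes it with the compiled `utils/chamfer3D` extension; the plain PyTorch sketch below only illustrates the metric (conventions vary, e.g. squared vs. Euclidean distances) and is much slower:

```python
# Illustrative Chamfer Distance between two point clouds (not the repo's code).
import torch

def chamfer_distance(p1: torch.Tensor, p2: torch.Tensor) -> torch.Tensor:
    """Symmetric CD between (N, 3) and (M, 3) point clouds."""
    d = torch.cdist(p1, p2)  # (N, M) pairwise Euclidean distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

cd = chamfer_distance(torch.rand(2048, 3), torch.rand(2048, 3))
```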
## Simulation

After reconstruction, you can use the simulator to render and manipulate LiDAR point clouds in the whole scenario. It supports dynamic scene re-play, novel LiDAR configurations (`--fov_lidar`, `--H_lidar`, `--W_lidar`), and novel trajectories (`--shift_x`, `--shift_y`, `--shift_z`).

We also provide a simple demo setting that transforms the LiDAR configuration from KITTI-360 to nuScenes, via the `--kitti2nus` flag in the bash script.
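To make the LiDAR-configuration flags concrete: `--H_lidar`/`--W_lidar` set the resolution of the rendered range image, and `--fov_lidar` its vertical field of view. A minimal spherical-projection sketch (ours; the FoV split and resolution below are illustrative, not the repo defaults):

```python
# Sketch: project an (N, 3) point cloud into an (H, W) range image.
import numpy as np

def points_to_range_image(points, H=64, W=1024, fov_up=2.0, fov_down=-24.8):
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)   # range per point
    yaw = np.arctan2(y, x)               # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-8), -1.0, 1.0))
    fov = np.radians(fov_up - fov_down)
    u = ((0.5 * (1.0 - yaw / np.pi)) * W).astype(int) % W             # column
    v = np.clip(((np.radians(fov_up) - pitch) / fov * H).astype(int), 0, H - 1)
    img = np.zeros((H, W), dtype=np.float32)
    img[v, u] = r                        # last point wins per cell
    return img
```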
Check the sequence config, the corresponding workspace, and the model path (`--ckpt`). Then run the following command:

```bash
bash run_kitti_lidar4d_sim.sh
```

The results will be saved in the workspace folder.
## Acknowledgement

We sincerely appreciate the great contribution of the following works:
## Citation

If you find our repo or paper helpful, feel free to support us with a star 🌟 or use the following citation:

```bibtex
@inproceedings{zheng2024lidar4d,
  title     = {LiDAR4D: Dynamic Neural Fields for Novel Space-time View LiDAR Synthesis},
  author    = {Zheng, Zehan and Lu, Fan and Xue, Weiyi and Chen, Guang and Jiang, Changjun},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2024}
}
```
## License

All code within this repository is under the Apache License 2.0.