# Official Implementation of A Vision-Centric Approach for Static Map Element Annotation
CAMA: arXiv | YouTube | Bilibili
CAMAv2: arXiv | YouTube | Bilibili
CAMA: Consistent and Accurate Map Annotation, nuScenes example:
Please run `git checkout camav2` to switch to the camav2 branch.
- Release the evaluation scripts (SRE, precision, recall, F1-score).
- Add LiDAR aggregation demo using CAMAv2 reconstructed poses.
- camav2_label.zip [Google Drive]
- CAMAv2 aggregates scenes with intersecting portions into one large scene called a site.
- This fixes CAMAv1's shortcoming of dropping head and tail frames.
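The evaluation metrics listed above (precision, recall, F1-score) can be sketched as follows. This is a minimal illustration using a Chamfer-style point-matching scheme with a distance threshold; the threshold value and matching rule are assumptions, not the repository's exact SRE implementation.

```python
import math

def prf1(pred_pts, gt_pts, thresh=0.5):
    """Compute precision/recall/F1 for 2D map points.

    A predicted point counts as a true positive if some ground-truth
    point lies within `thresh` meters (assumption; the repo's exact
    matching scheme may differ).
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    # Predicted points matched to any ground-truth point -> precision
    tp_pred = sum(1 for p in pred_pts if any(dist(p, g) <= thresh for g in gt_pts))
    # Ground-truth points matched to any predicted point -> recall
    tp_gt = sum(1 for g in gt_pts if any(dist(g, p) <= thresh for p in pred_pts))

    precision = tp_pred / len(pred_pts) if pred_pts else 0.0
    recall = tp_gt / len(gt_pts) if gt_pts else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

In practice the released scripts evaluate whole map elements (polylines), but the per-point matching idea above is the core of threshold-based precision/recall.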
- cama_label.zip [Google Drive]
- Upload 73 nuScenes scenes from v1.0-test with CAMA labels.
- Add reprojection demo for both CAMA and original nuScenes labels.
- Note: if using this older version, change `self.map_width` and `self.map_height` to 300 in reproject.py.
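The reprojection demo above amounts to projecting 3D map points into each camera image using the camera pose and intrinsics. A minimal pinhole-model sketch (the function name and all matrix values are hypothetical, not the API of reproject.py):

```python
def project_point(pt_world, R, t, fx, fy, cx, cy):
    """Project a 3D world-frame point into pixel coordinates.

    R (3x3 row-major nested list) and t (length-3 list) transform
    world -> camera; fx, fy, cx, cy are pinhole intrinsics.
    Returns (u, v) or None if the point is behind the camera.
    """
    # world -> camera frame: p_cam = R @ p_world + t
    pc = [sum(R[i][j] * pt_world[j] for j in range(3)) + t[i] for i in range(3)]
    if pc[2] <= 0:  # point is behind the image plane
        return None
    # perspective division and intrinsic scaling
    u = fx * pc[0] / pc[2] + cx
    v = fy * pc[1] / pc[2] + cy
    return u, v
```

Rendering a label then reduces to projecting each vertex of a map polyline and drawing the resulting 2D segments onto the image.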
1. Install the required Python packages:

   ```shell
   python3 -m pip install -r requirements.txt
   ```

2. Download camav2_label.zip [Google Drive].

3. Modify config.yaml accordingly:
   - dataroot: path to the original nuScenes dataset
   - converted_dataroot: output directory for the converted dataset
   - cama_label_file: path to the camav2_label.zip you downloaded in step 2
   - output_video_dir: directory where the demo videos are written
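A config.yaml filled in with these fields might look like the sketch below; all paths are placeholders for illustration, not defaults shipped with the repository.

```yaml
# All paths below are hypothetical; adjust them to your own setup.
dataroot: /data/nuscenes                   # original nuScenes dataset
converted_dataroot: /data/nuscenes_cama    # output converted dataset
cama_label_file: /data/camav2_label.zip    # label archive downloaded in step 2
output_video_dir: /data/cama_videos        # demo videos are written here
```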
4. Run the pipeline:

   ```shell
   python3 main.py --config config.yaml
   ```
If you benefit from this work, please cite our papers:
```bibtex
@inproceedings{zhang2021deep,
  title={A Vision-Centric Approach for Static Map Element Annotation},
  author={Zhang, Jiaxin and Chen, Shiyuan and Yin, Haoran and Mei, Ruohong and Liu, Xuan and Yang, Cong and Zhang, Qian and Sui, Wei},
  booktitle={IEEE International Conference on Robotics and Automation (ICRA 2024)},
  pages={1-7},
  year={2024}
}

@article{chen2024camav2,
  title={CAMAv2: A Vision-Centric Approach for Static Map Element Annotation},
  author={Chen, Shiyuan and Zhang, Jiaxin and Mei, Ruohong and Cai, Yingfeng and Yin, Haoran and Chen, Tao and Sui, Wei and Yang, Cong},
  journal={arXiv preprint arXiv:2407.21331},
  year={2024}
}
```

