FuseRoad: Enhancing Lane Shape Prediction Through Semantic Knowledge Integration and Cross-Dataset Training
Figure 1: Architecture of FuseRoad.
- 🔥 End-to-end multi-task training: uses two datasets simultaneously.
- 🔥 High performance: reaches a remarkable 97.42% F1 score on the TuSimple testing set.
- Linux Ubuntu 20.04 with Python 3.11, PyTorch 2.1.2, cudatoolkit 12.2
Create a new conda environment and install the required packages:
conda create --name FuseRoad python=3.11
conda activate FuseRoad
After creating the environment, install the required packages:
pip install openmim
mim install mmcv-full
pip install -r requirements.txt
cd models/py_utils/orn && pip install .
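After installation, a quick sanity check can confirm the core dependencies are importable. This is a minimal sketch, not part of the repo; the package names are assumptions based on the steps above:

```python
import importlib.util

def missing_packages(names):
    """Return the subset of `names` that cannot be imported in this environment."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# Modules the installation steps above are expected to provide (assumed names).
missing = missing_packages(["torch", "mmcv", "cv2"])
print("Missing:", missing or "none")
```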
The CULane evaluation environment is required to evaluate performance on the CULane dataset. Please refer to CULane to compile the evaluation environment.
Download and extract the TuSimple train, val, and test sets with annotations from TuSimple, and download and extract the CULane train, val, and test sets with annotations from CULane.
We expect the directory structure to be the following:
TuSimple/
LaneDetection/
clips/
label_data_0313.json
label_data_0531.json
label_data_0601.json
test_label.json
CULane/
driver_23_30frame/
driver_37_30frame/
driver_100_30frame/
driver_161_90frame/
driver_182_30frame/
driver_193_90frame/
list/
test_split/
test.txt
train.txt
train_gt.txt
val.txt
val_gt.txt
Cityscapes/
train/          # images in the train set
train_labels/   # labels in the train set
val/            # images in the val set
val_labels/     # labels in the val set
Cityscapes_class_dict_19_classes.csv
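To catch path mistakes early, the layout above can be verified with a short script. This is a minimal sketch (not part of the repo); the required entries are a subset copied from the expected tree above:

```python
import os

# A representative subset of the expected directory tree above.
REQUIRED = {
    "TuSimple": ["LaneDetection/clips", "LaneDetection/label_data_0313.json",
                 "LaneDetection/test_label.json"],
    "CULane": ["driver_23_30frame", "list/train_gt.txt", "list/test.txt"],
    "Cityscapes": ["train", "train_labels", "val", "val_labels"],
}

def missing_entries(data_root):
    """Return the dataset entries that are absent under `data_root`."""
    return [os.path.join(ds, rel)
            for ds, rels in REQUIRED.items()
            for rel in rels
            if not os.path.exists(os.path.join(data_root, ds, rel))]

if __name__ == "__main__":
    problems = missing_entries(".")
    print("Missing:", problems or "none")
```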
Download the pretrained weights from the Segformer-provided OneDrive and put them in the models/py_utils/SegFormer/imagenet_pretrained directory. To train a model:
(If you only want to use the train set, see the config file and set "train_split": "train".)
(If you don't want to use the SRKE module, set "use_SRKE": False.)
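For reference, the two options above might appear in the config file roughly like this. This is a hypothetical fragment: only "train_split" and "use_SRKE" come from the text above (the source writes the latter's value Python-style as False), and any surrounding keys may differ from the actual file:

```json
{
    "train_split": "train",
    "use_SRKE": false
}
```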
python train.py CONFIG_FILE_NAME --model_name FuseRoad
- Visualized images are in ./results during training.
- Saved model files are in ./cache during training.
To train a model from a snapshot model file:
python train.py CONFIG_FILE_NAME --model_name FuseRoad --iter ITER_NUMS
Download the trained model from GoogleDrive and put it in the ./cache directory.
python test.py CONFIG_FILE_NAME --model_name FuseRoad --modality eval --split testing --testiter ITER_NUMS
python test.py FuseRoad_TuSimple --model_name FuseRoad --modality eval --split testing --testiter 800000
python test.py FuseRoad_CULane_b5 --model_name FuseRoad --modality eval --split testing --testiter 800000
Then:
cd lane_evaluation
sh run.sh # to obtain the overall F1-measure
sh Run.sh # to validate the splitting performance
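For context on what run.sh reports, the F1-measure is the harmonic mean of precision and recall. A minimal illustration of the formula (not the repo's evaluation code):

```python
def f1_score(tp, fp, fn):
    """F1 = 2 * precision * recall / (precision + recall)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Example: 90 true positives, 10 false positives, 10 false negatives.
print(f1_score(90, 10, 10))  # 0.9
```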
python test.py FuseRoad_TuSimple --model_name FuseRoad --modality eval --split testing --testiter 800000 --debug
To demo on a set of images (store images in root_dir/images; the detected results will be saved in root_dir/detections):
python test.py FuseRoad_TuSimple --model_name FuseRoad --modality images --image_root root_dir --debug
Alternatively, use the following command to demo on images in a specific directory and choose the output directory (recommended):
python3 demo.py FuseRoad_TuSimple --model_name FuseRoad --testiter 800000 --image_root image_dir --save_root save_dir
Table 1: Comparison with State-of-the-art methods on TuSimple testing set.
Figure 2: Qualitative results on TuSimple testing set.
FuseRoad is released under the BSD 3-Clause License.