
Commit

Merge branch 'main' of https://github.com/jsten07/CNNvsTransformer into main
jsten07 committed Jan 4, 2024
2 parents 303cdbd + 89481d7 commit 77b5953
Showing 1 changed file (README.md) with 32 additions and 17 deletions.
# CNNvsTransformer: Comparing CNN- and Transformer-Based Deep Learning Models for Semantic Segmentation of Remote Sensing Images
Master Thesis at the Institute for Geoinformatics, University of Münster, Germany.

### Description
With the given code, results for the thesis mentioned above were created.
A PDF of the thesis will be added soon.

The following describes the basic steps for using the code; depending on the utilized infrastructure, some adaptations may be needed.

Further documentation will be added soon.

### How To Train the Models
1. Prepare data
    1. Download data, for example one of:
        1. [ISPRS Benchmark on Semantic Labeling](https://www.isprs.org/education/benchmarks/UrbanSemLab/default.aspx)
        2. [FloodNet](https://github.com/BinaLab/FloodNet-Challenge-EARTHVISION2021)
    2. Patchify the data into appropriately sized patches, e.g. $512\times 512$
        1. `/helper/patchify.py` might be useful
    3. Split the data into training, validation, and test sets in the following folder structure, where each `rgb` folder contains images with the same file names as their ground-truth labels in the matching `label` folder:
```
|-- data
| |-- rgb
| |-- rgb_valid
| |-- rgb_test
| |-- label
| |-- label_valid
| |-- label_test
```

2. If you use the PALMA cluster, adapt and run one of `/PALMA/train_unet.sh` or `/PALMA/train_segformer.sh` and you are done
3. Otherwise, install the requirements (see `/PALMA/requirements.txt` and the modules loaded in `/PALMA/train_unet.sh`)
4. Check the available parameters in `train.py` and run the following command with the respective adjustments:
```
python3 train.py --data_path /your/path/to/folder/data --name ./weights
```
5. Find your trained model in the `./weights` folder
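The patchifying step above (1.2) can be sketched roughly as follows. This is a minimal NumPy sketch for illustration only, not the repository's actual `/helper/patchify.py`:

```python
import numpy as np

def patchify(image: np.ndarray, patch_size: int = 512) -> list:
    """Cut an (H, W, C) image into non-overlapping patch_size x patch_size
    tiles, dropping incomplete tiles at the right/bottom borders."""
    h, w = image.shape[:2]
    patches = []
    for y in range(0, h - patch_size + 1, patch_size):
        for x in range(0, w - patch_size + 1, patch_size):
            patches.append(image[y:y + patch_size, x:x + patch_size])
    return patches

# Example: a 6000x6000 px tile (the ISPRS Potsdam tile size) yields 11 x 11 = 121 patches
tile = np.zeros((6000, 6000, 3), dtype=np.uint8)
print(len(patchify(tile)))  # 121
```

Slicing returns NumPy views, so no pixel data is copied until the patches are written to disk.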

By default, a U-Net model is trained for 20 epochs; pass `--model segformer` or `--epochs <number>` to change this. All further script arguments are listed in `train.py` and can be shown with `python3 train.py --help`.

### Evaluation and Visualization

The evaluation and visualization of the models were done with the notebooks in the `./Notebooks` directory:

- `compare.ipynb`: comparison of two models by visualizing predictions of both and calculating metrics on test data
- `homogeneity.ipynb`: calculation of clustering evaluation measures
- `radarchart.ipynb`: plot radar charts and bar charts for the calculated metrics
- `count_classes.ipynb`: count pixels per class and compute the mean and standard deviation of an image dataset
- OUTDATED: `Segformer_Run.ipynb`, `UNet_Run.ipynb`, `Segformer_visualize_attention.ipynb`
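For reference, the per-class Intersection over Union (IoU), a standard semantic-segmentation metric, can be computed from integer-encoded label masks as in this minimal sketch (illustrative only, not the notebooks' exact code):

```python
import numpy as np

def per_class_iou(pred: np.ndarray, target: np.ndarray, num_classes: int) -> list:
    """Per-class Intersection over Union for integer-encoded label masks."""
    ious = []
    for c in range(num_classes):
        intersection = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        # A class absent from both masks has an undefined IoU
        ious.append(float(intersection / union) if union > 0 else float("nan"))
    return ious

pred = np.array([[0, 0], [1, 1]])
target = np.array([[0, 1], [1, 1]])
print(per_class_iou(pred, target, num_classes=2))  # class 0: 0.5, class 1: ~0.667
```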


### Acknowledgements

