
Commit 51792ee

deleting .idea config files and changing structure

1 parent f4eca76

19 files changed: +23 −396 lines

.idea/Unnamed.iml

Lines changed: 0 additions & 11 deletions
This file was deleted.

.idea/inspectionProfiles/Project_Default.xml

Lines changed: 0 additions & 12 deletions
This file was deleted.

.idea/misc.xml

Lines changed: 0 additions & 7 deletions
This file was deleted.

.idea/modules.xml

Lines changed: 0 additions & 8 deletions
This file was deleted.

.idea/sonarlint/issuestore/1/9/19359a61ae2446b51b549167b014da2fcf265768

Whitespace-only changes.

.idea/sonarlint/issuestore/8/e/8ec9a00bfd09b3190ac6b22251dbb1aa95a0579d

Whitespace-only changes.

.idea/sonarlint/issuestore/index.pb

Lines changed: 0 additions & 19 deletions
This file was deleted.

.idea/vcs.xml

Lines changed: 0 additions & 6 deletions
This file was deleted.

.idea/workspace.xml

Lines changed: 0 additions & 251 deletions
This file was deleted.

README.md

Lines changed: 2 additions & 2 deletions
@@ -44,7 +44,7 @@ python3 dataset/combine_images.py
 python3 split_dataset.py
 ```
 
-After that we will have two folders ```train```and ```test```with the prepared data to train. Remember that if you are using **Google Colab** you should upload those folders.
+After that we will have two folders ```train```and ```test```with the prepared data to train. Remember that if you are using **Google Colab** you should upload those folders in a folder called ```dataset```.
 
 ## Run
 
@@ -54,4 +54,4 @@ python3 -m src.train --dataset PATH_TO_DATASET
 
 ## Improvements
 
-The dataset was really small (about X images) so a good improvement could be to increase the dataset to see if the model improves its performance. Also, pix2pixHD improvements by Nvidia could be applied in order to output sharper and more define images.
+The dataset was really small (about X images) so a good improvement could be to increase the dataset to see if the model improves its performance. Also, pix2pixHD improvements by Nvidia could be applied in order to output sharper and more define images. Actually, my first choice was to try to implement it in Tensorflow since the only implementation I have found is the original one in Pytorch, but after reading the paper I decided that it was too difficult for a person who does not even have a proper GPU: 3 different discriminators with different scales, feature matching loss using features from each discriminator and two different generators, the local enhancer and the global network who must be trained separately and then fine-tuned together.
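For readers unfamiliar with the pix2pixHD pieces the README mentions, the sketch below illustrates the feature-matching idea with multi-scale discriminators in PyTorch. It is a minimal illustration under stated assumptions, not code from this repository or from Nvidia's official implementation; `PatchDiscriminator` and `feature_matching_loss` are hypothetical names and the layer sizes are arbitrary.

```python
# Illustrative sketch only (not this repo's code): a feature-matching loss
# computed over the intermediate features of several discriminators, each
# seeing the image at a different scale, as described in pix2pixHD.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PatchDiscriminator(nn.Module):
    """Small PatchGAN-style discriminator that also returns its intermediate
    feature maps so they can be matched between real and generated images."""

    def __init__(self, in_ch=3, base=64):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Conv2d(in_ch, base, 4, 2, 1), nn.LeakyReLU(0.2)),
            nn.Sequential(nn.Conv2d(base, base * 2, 4, 2, 1), nn.LeakyReLU(0.2)),
            nn.Sequential(nn.Conv2d(base * 2, base * 4, 4, 2, 1), nn.LeakyReLU(0.2)),
            nn.Conv2d(base * 4, 1, 4, 1, 1),  # patch-wise real/fake logits
        ])

    def forward(self, x):
        feats = []
        for block in self.blocks:
            x = block(x)
            feats.append(x)
        return feats  # last entry is the prediction, earlier ones are features


def feature_matching_loss(discs, real, fake):
    """L1 distance between real and fake features at every layer of every
    discriminator; coarser discriminators see a downsampled input."""
    loss = 0.0
    for i, disc in enumerate(discs):
        r, f = real, fake
        if i > 0:  # downsample the input for the coarser scales
            r = F.avg_pool2d(real, kernel_size=2 ** i)
            f = F.avg_pool2d(fake, kernel_size=2 ** i)
        real_feats = disc(r)
        fake_feats = disc(f)
        for rf, ff in zip(real_feats[:-1], fake_feats[:-1]):
            loss = loss + F.l1_loss(ff, rf.detach())
    return loss


# Usage: three discriminators, one per scale.
discriminators = nn.ModuleList([PatchDiscriminator() for _ in range(3)])
real_img = torch.randn(1, 3, 256, 256)
fake_img = torch.randn(1, 3, 256, 256)
print(feature_matching_loss(discriminators, real_img, fake_img))
```

In the full pix2pixHD setup each discriminator also contributes a GAN loss, and the global generator and local enhancer are trained separately before being fine-tuned jointly, which is the part the README flags as hard to reproduce without a capable GPU.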
