# Installation

### Dependencies Installation

This repository is built in PyTorch 1.8.1 and tested on Ubuntu 22.04 (Python 3.8, CUDA 11.6, cuDNN 8.5). Follow these instructions:

1. Clone our repository
```
git clone https://github.com/toummHus/HAIR
cd HAIR
```

2. Create a conda environment
The conda environment can be recreated from the env.yml file:
```
conda env create -f env.yml
```

### Dataset Download and Preparation

All five datasets used in the paper can be downloaded from the following locations:

Denoising: [BSD400](https://drive.google.com/file/d/1idKFDkAHJGAFDn1OyXZxsTbOSBx9GS8N/view?usp=sharing), [WED](https://drive.google.com/file/d/19_mCE_GXfmE5yYsm-HEzuZQqmwMjPpJr/view?usp=sharing), [Urban100](https://drive.google.com/drive/folders/1B3DJGQKB6eNdwuQIhdskA64qUuVKLZ9u)

Deraining: [Train100L&Rain100L](https://drive.google.com/drive/folders/1-_Tw-LHJF4vh8fpogKgZx1EQ9MhsJI_f?usp=sharing)

Dehazing: [RESIDE](https://sites.google.com/view/reside-dehaze-datasets/reside-v0) (OTS)

Deblur: [Gopro](https://seungjunnah.github.io/Datasets/gopro)

Low-light Enhancement: [LOL](https://drive.google.com/file/d/157bjO1_cFuSd0HWDUuAmcHRJDVyWpOxB/view)
The training data should be placed in the ```data/Train/{task_name}``` directory, where ```task_name``` can be Denoise, Derain, or Dehaze. After placing the training data, the directory structure will be as follows:
```
└───Train
    ├───Dehaze
    │   ├───original
    │   └───synthetic
    ├───Denoise
    └───Derain
        ├───gt
        └───rainy
```
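The layout above can also be created programmatically before copying the data in. A minimal Python sketch (the directory names are taken from the tree above; the ```data/``` root follows the Training section of the README):

```python
import os

# Training-data layout, as shown in the tree above.
train_dirs = [
    "data/Train/Dehaze/original",
    "data/Train/Dehaze/synthetic",
    "data/Train/Denoise",
    "data/Train/Derain/gt",
    "data/Train/Derain/rainy",
]

for d in train_dirs:
    os.makedirs(d, exist_ok=True)  # no-op if the directory already exists
```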
The testing data should be placed in the ```test``` directory, wherein each task has a separate directory. The test directory after setup:

```
├───dehaze
│   ├───input
│   └───target
├───denoise
│   ├───bsd68
│   └───urban100
└───derain
    └───Rain100L
        ├───input
        └───target
```
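Before running evaluation, the test layout can be sanity-checked with a small script (the paths mirror the tree above; the ```test``` root and the helper name are illustrative):

```python
import os

# Expected test-set layout, mirroring the tree above.
EXPECTED = [
    "dehaze/input",
    "dehaze/target",
    "denoise/bsd68",
    "denoise/urban100",
    "derain/Rain100L/input",
    "derain/Rain100L/target",
]

def missing_dirs(root="test"):
    """Return the expected sub-directories that do not exist under root."""
    return [d for d in EXPECTED if not os.path.isdir(os.path.join(root, d))]

if __name__ == "__main__":
    missing = missing_dirs()
    print("missing:", missing or "none")
```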
# <img src = "pngs/barber.png" style="zoom:27%;" > HAIR: Hypernetworks-based All-in-One Image Restoration

[](https://arxiv.org/abs/2306.13090)

<hr />

> **Abstract:** *Image restoration involves recovering a high-quality clean image from its degraded version and is a fundamental task in computer vision. Recent progress has demonstrated the effectiveness of models that address various degradations simultaneously, i.e., All-in-One image restoration models. However, these existing methods typically apply the same parameters to images with different degradation types, forcing the model to trade off between degradation types and impairing overall performance. To solve this problem, we propose HAIR, a Hypernetworks-based plug-and-play method that dynamically generates parameters for the corresponding networks based on the contents of input images. HAIR consists of two main components: a Classifier (Cl) and a Hyper Selecting Net (HSN). Specifically, the Classifier is a simple image classification network that produces a Global Information Vector (GIV) containing the degradation information of the input image, and each HSN is a simple fully-connected network that receives the GIV and outputs parameters for the corresponding modules. Extensive experiments show that incorporating HAIR into existing architectures can significantly improve their performance on image restoration tasks at low cost, **even though HAIR only generates parameters and does not change these models' logical structures at all.** By incorporating HAIR into the popular Restormer model, our method obtains superior or at least comparable performance to current state-of-the-art methods on a range of image restoration tasks.*
<hr />
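The Classifier → GIV → HSN pipeline described in the abstract can be illustrated with a toy NumPy sketch. All layer sizes, function names, and the random projections here are invented for illustration; the actual model generates parameters for Restormer-scale modules:

```python
import numpy as np

rng = np.random.default_rng(0)

def classifier(image):
    """Toy stand-in for the Classifier: map an image to a Global
    Information Vector (GIV) summarizing its degradation."""
    pooled = image.mean(axis=(0, 1))           # global average pool, (channels,)
    W = rng.standard_normal((8, pooled.size))  # GIV dimension 8 (invented)
    return W @ pooled                          # GIV, shape (8,)

def hyper_selecting_net(giv, out_dim, in_dim):
    """Toy HSN: a fully-connected map from the GIV to a weight matrix
    for one target module (here a single linear layer)."""
    W = rng.standard_normal((out_dim * in_dim, giv.size))
    return (W @ giv).reshape(out_dim, in_dim)

image = rng.random((16, 16, 3))           # dummy degraded image
giv = classifier(image)                   # degradation-aware summary
weights = hyper_selecting_net(giv, 4, 3)  # parameters generated per input
output = weights @ image.mean(axis=(0, 1))
```

The key point the sketch shows is that the target module's weights are a function of the input image, not fixed training-time constants.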

## Network Architecture

<img src = "pngs/arch.png">

## Installation and Data Preparation

See [INSTALL.md](INSTALL.md) for the installation of dependencies and the dataset preparation required to run this codebase. (Note that this repository is prepared for the 3-degradation setting. For the 5-degradation setting, please refer to [IDR](https://github.com/JingHao99/IDR-Ingredients-oriented-Degradation-Reformulation).)
## Training

After preparing the training data in the ```data/``` directory, use
```
python train.py
```
to start training the model. Use the ```de_type``` argument to choose the combination of degradation types to train on. By default it is set to all three degradation types (noise, rain, and haze).

Example usage, to train only on deraining and dehazing:
```
python train.py --de_type derain dehaze
```
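A multi-valued flag like this is typically declared with argparse's `nargs`. A hedged sketch of how ```de_type``` might be parsed (the real train.py may define it differently):

```python
import argparse

parser = argparse.ArgumentParser()
# Accept one or more degradation names; the default covers all three tasks.
parser.add_argument("--de_type", nargs="+",
                    default=["denoise", "derain", "dehaze"],
                    choices=["denoise", "derain", "dehaze"])

# Parsing the example command line from above:
args = parser.parse_args(["--de_type", "derain", "dehaze"])
print(args.de_type)  # ['derain', 'dehaze']
```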

## Testing

After preparing the testing data in the ```test``` directory, place the model checkpoint file in the ```ckpt``` directory. The pretrained model can be downloaded [here](https://drive.google.com/file/d/1j-b5Od70pGF7oaCqKAfUzmf-N-xEAjYl/view?usp=sharingg); alternatively, it is also available under the releases tab. To perform the evaluation, use
```
python test.py --mode {n}
```
where ```n``` sets the tasks to be evaluated on: 0 for denoising, 1 for deraining, 2 for dehazing, and 3 for the all-in-one setting.

Example usage, to test on all the degradation types at once:

```
python test.py --mode 3
```
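The ```--mode``` numbering can be read as a lookup table. A sketch of that mapping (the dictionary and helper are illustrative, not code from test.py):

```python
# Mapping of --mode values to evaluation tasks, per the description above.
MODE_TO_TASKS = {
    0: ["denoise"],
    1: ["derain"],
    2: ["dehaze"],
    3: ["denoise", "derain", "dehaze"],  # all-in-one setting
}

def tasks_for_mode(mode):
    """Return the list of tasks evaluated for a given --mode value."""
    try:
        return MODE_TO_TASKS[mode]
    except KeyError:
        raise ValueError(f"unknown mode {mode}; expected 0-3") from None

print(tasks_for_mode(3))  # ['denoise', 'derain', 'dehaze']
```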

## Demo
To obtain visual results from the model, ```demo.py``` can be used. After placing the saved model file in the ```ckpt``` directory, run:
```
python demo.py --test_path {path_to_degraded_images} --output_path {save_images_here}
```
Example usage, to run inference on a directory of images:
```
python demo.py --test_path './test/demo/' --output_path './output/demo/'
```
Example usage, to run inference on a single image:
```
python demo.py --test_path './test/demo/image.png' --output_path './output/demo/'
```
To use the tiling option while running ```demo.py```, set the ```--tile``` option to ```True```. The tile size and tile overlap can be adjusted using the ```--tile_size``` and ```--tile_overlap``` options respectively.
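Tiled inference splits a large image into overlapping patches so memory stays bounded. A minimal sketch of the tile-coordinate computation along one axis (the helper is illustrative, not the actual demo.py implementation; only the ```--tile_size```/```--tile_overlap``` semantics are taken from the text above):

```python
def tile_starts(length, tile, overlap):
    """Start offsets of tiles of size `tile`, with `overlap` pixels shared
    between neighbours, covering [0, length); the last tile is placed
    flush against the end so no pixels are missed."""
    if tile >= length:
        return [0]  # one tile already covers the whole axis
    stride = tile - overlap
    starts = list(range(0, length - tile, stride))
    starts.append(length - tile)  # ensure full coverage of the tail
    return starts

# e.g. a 100-pixel axis with 48-pixel tiles overlapping by 16 pixels
print(tile_starts(100, 48, 16))  # [0, 32, 52]
```

In 2-D the same computation runs once per axis, and overlapping outputs are blended (e.g. averaged) when the restored tiles are stitched back together.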
## Results
Performance results of the HAIR framework trained under the all-in-one setting.

**Performance**

<img src = "pngs/hair3d-results.png">

<img src = "pngs/hair5d-results.png">

**Visual Results**
<img src = "pngs/hairvisual.png" style="zoom: 67%;" >

## Citation

If you use our work, please consider citing:

@inproceedings{
}

## Contact

Should you have any questions, please contact [email protected].

**Acknowledgment:** This repository is highly based on the [PromptIR](https://github.com/va1shn9v/PromptIR) repository; thanks for the great work.
Pre-trained model to perform all-in-one blind (3 degradations) image restoration is available [here](https://drive.google.com/file/d/1Zr0gy8MPFI6q0rytGXuqyqLKrk8bBeQg/view?usp=sharing)