(CVPR2024) Scaling Up to Excellence: Practicing Model Scaling for Photo-Realistic Image Restoration In the Wild
[Paper] [Project Page] [Online Demo (Coming soon)]
Fanghua Yu, Jinjin Gu, Zheyuan Li, Jinfan Hu, Xiangtao Kong, Xintao Wang, Jingwen He, Yu Qiao, Chao Dong
Shenzhen Institute of Advanced Technology; Shanghai AI Laboratory; University of Sydney; The Hong Kong Polytechnic University; ARC Lab, Tencent PCG; The Chinese University of Hong Kong
⚠ Due to the large RAM (60G) and VRAM (30G x2) costs of SUPIR, we are working on releasing the online demo.
- Clone the repo
git clone https://github.com/Fanghua-Yu/SUPIR.git
cd SUPIR
- Install dependent packages
conda create -n SUPIR python=3.8 -y
conda activate SUPIR
pip install --upgrade pip
pip install -r requirements.txt
- Download Checkpoints
For users who can connect to huggingface, please set LLAVA_CLIP_PATH, SDXL_CLIP1_PATH, SDXL_CLIP2_CKPT_PTH in CKPT_PTH.py to None. These CLIPs will then be downloaded automatically.
- SUPIR-v0Q: Baidu Netdisk, Google Drive
Default training settings from the paper. High generalization and high image quality in most cases.
- SUPIR-v0F: Baidu Netdisk, Google Drive
Trained with light-degradation settings. The Stage1 encoder of SUPIR-v0F preserves more detail when facing light degradations.
- Edit Custom Path for Checkpoints (see the sketch below)
* [CKPT_PTH.py] --> LLAVA_CLIP_PATH, LLAVA_MODEL_PATH, SDXL_CLIP1_PATH, SDXL_CLIP2_CACHE_DIR
* [options/SUPIR_v0.yaml] --> SDXL_CKPT, SUPIR_CKPT_Q, SUPIR_CKPT_F
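For orientation, here is a minimal sketch of what an edited CKPT_PTH.py could look like. Every path below is a hypothetical placeholder (not a required name or location), and the CLIP entries can stay None so they are fetched from huggingface automatically, as noted above:

# CKPT_PTH.py -- illustrative sketch only; substitute the paths where you actually stored the weights
LLAVA_CLIP_PATH = None                            # None => fetched from huggingface automatically
LLAVA_MODEL_PATH = '/data/ckpts/llava-v1.5-13b'   # hypothetical local LLaVA directory
SDXL_CLIP1_PATH = None                            # None => fetched from huggingface automatically
SDXL_CLIP2_CACHE_DIR = None                       # None => default cache; or point to a local copy

options/SUPIR_v0.yaml is edited in the same spirit: point SDXL_CKPT at your SDXL base checkpoint file, and SUPIR_CKPT_Q / SUPIR_CKPT_F at the SUPIR-v0Q and SUPIR-v0F weights downloaded above.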
- RealPhoto60 (test dataset): Baidu Netdisk, Google Drive
Usage:
-- python test.py [options]
-- python gradio_demo.py [interactive options]
--img_dir Input folder.
--save_dir Output folder.
--upscale Upsampling ratio of given inputs. Default: 1
--SUPIR_sign Model selection. Default: 'Q'; Options: ['F', 'Q']
--seed Random seed. Default: 1234
--min_size Minimum resolution of output images. Default: 1024
--edm_steps Number of steps for the EDM Sampling Scheduler. Default: 50
--s_stage1 Control Strength of Stage1. Default: -1 (negative means invalid)
--s_churn Original hyper-parameter of EDM. Default: 5
--s_noise Original hyper-parameter of EDM. Default: 1.003
--s_cfg Classifier-free guidance scale for prompts. Default: 7.5
--s_stage2 Control Strength of Stage2. Default: 1.0
--num_samples Number of samples for each input. Default: 1
--a_prompt Additive positive prompt for all inputs.
Default: 'Cinematic, High Contrast, highly detailed, taken using a Canon EOS R camera,
hyper detailed photo - realistic maximum detail, 32k, Color Grading, ultra HD, extreme
meticulous detailing, skin pore detailing, hyper sharpness, perfect without deformations.'
--n_prompt Fixed negative prompt for all inputs.
Default: 'painting, oil painting, illustration, drawing, art, sketch, oil painting,
cartoon, CG Style, 3D render, unreal engine, blurring, dirty, messy, worst quality,
low quality, frames, watermark, signature, jpeg artifacts, deformed, lowres, over-smooth'
--color_fix_type Color Fixing Type. Default: 'Wavelet'; Options: ['None', 'AdaIn', 'Wavelet']
--linear_CFG Linearly (with sigma) increase CFG from 'spt_linear_CFG' to s_cfg (see the sketch after this option list). Default: False
--linear_s_stage2 Linearly (with sigma) increase s_stage2 from 'spt_linear_s_stage2' to s_stage2. Default: False
--spt_linear_CFG Start point of linearly increasing CFG. Default: 1.0
--spt_linear_s_stage2 Start point of linearly increasing s_stage2. Default: 0.0
--ae_dtype Inference data type of AutoEncoder. Default: 'bf16'; Options: ['fp32', 'bf16']
--diff_dtype Inference data type of Diffusion. Default: 'fp16'; Options: ['fp32', 'fp16', 'bf16']
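As a rough illustration of what the --linear_CFG / --linear_s_stage2 schedules describe (a sketch of the general idea only, not SUPIR's actual implementation), a guidance weight can be ramped linearly in sigma from its start point at the noisiest step up to its full value at the cleanest step:

import numpy as np

def linear_with_sigma(sigmas, start_value, end_value):
    # Illustrative sketch: interpolate linearly in sigma, so the weight equals
    # start_value at the largest sigma (noisiest step) and end_value at the
    # smallest sigma (cleanest step).
    sigmas = np.asarray(sigmas, dtype=np.float64)
    t = (sigmas - sigmas.min()) / (sigmas.max() - sigmas.min())  # 1 = noisiest, 0 = cleanest
    return end_value + (start_value - end_value) * t

# Hypothetical EDM-style sigma schedule for 50 steps, CFG ramped 1.0 -> 7.5
sigmas = np.geomspace(80.0, 0.002, num=50)
cfg_per_step = linear_with_sigma(sigmas, start_value=1.0, end_value=7.5)
print(cfg_per_step[0], cfg_per_step[-1])  # ~1.0 at the noisiest step, 7.5 at the cleanest

The same kind of ramp applies to s_stage2 when --linear_s_stage2 is set, starting from spt_linear_s_stage2.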
# For the best quality in most cases
CUDA_VISIBLE_DEVICES=0,1 python test.py --img_dir '/opt/data/private/LV_Dataset/DiffGLV-Test-All/RealPhoto60/LQ' --save_dir ./results-Q --SUPIR_sign Q --upscale 2
# For light degradations and high fidelity
CUDA_VISIBLE_DEVICES=0,1 python test.py --img_dir '/opt/data/private/LV_Dataset/DiffGLV-Test-All/RealPhoto60/LQ' --save_dir ./results-F --SUPIR_sign F --upscale 2 --s_cfg 4.0 --linear_CFG
CUDA_VISIBLE_DEVICES=0,1 python gradio_demo.py --ip 0.0.0.0 --port 6688 --use_image_slider --log_history
# less VRAM & slower (12G for Diffusion, 16G for LLaVA)
CUDA_VISIBLE_DEVICES=0,1 python gradio_demo.py --ip 0.0.0.0 --port 6688 --use_image_slider --log_history --loading_half_params --use_tile_vae --load_8bit_llava
@misc{yu2024scaling,
title={Scaling Up to Excellence: Practicing Model Scaling for Photo-Realistic Image Restoration In the Wild},
author={Fanghua Yu and Jinjin Gu and Zheyuan Li and Jinfan Hu and Xiangtao Kong and Xintao Wang and Jingwen He and Yu Qiao and Chao Dong},
year={2024},
eprint={2401.13627},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
This project, "Scaling Up to Excellence: Practicing Model Scaling for Photo-Realistic Image Restoration In the Wild," builds upon work originally licensed under the MIT License. Modifications and additional contributions made in this fork are licensed under the GNU Affero General Public License (AGPL). This dual licensing approach allows users to choose which license to comply with, as follows:
- The original code from the upstream repository remains under the MIT License. See LICENSE for the MIT License terms.
- Modifications and contributions made by FurkanGozukara and any subsequent contributions to this fork are under the AGPL. For more details on the AGPL and its terms, please see LICENSE_AGPL.md.
This dual-licensing model is designed to ensure that contributions and modifications made to this project remain free and open, in accordance with the principles of the AGPL, while respecting the open-source nature of the original work under the MIT License.
If you have any questions, please email [email protected].