
【ECCV'2024🔥】Quanta Video Restoration

Overview

QUIVER (Quanta Video Restoration) is a deep learning-based framework for restoring quanta video data. The project focuses on enhancing extreme low-light and high-speed imaging through advanced post-processing techniques.

Features

  • Model checkpoints trained on simulated data
  • Optical flow extraction and processing
  • Patch-based training for efficient learning

🧩 Dataset and Pre-trained Models

| Datasets | Model | 3.25 PPP | 9.75 PPP | 19.5 PPP | 26 PPP |
|---|---|---|---|---|---|
| I2_2000FPS | QUIVER | Link | Link | Link | Link |
| I2_2000FPS | EMVD | Link | Link | Link | Link |
| I2_2000FPS | Spk2ImgNet | Link | Link | Link | Link |
| I2_2000FPS | FloRNN | Link | Link | Link | Link |
| I2_2000FPS | RVRT | Link | Link | Link | Link |

Installation

Creating the Conda Environment

To set up the environment, run the following command:

```
conda env create -f QUIVER_environment.yml
```

Model Checkpoints

  • All model checkpoints are trained on simulated data.
  • Checkpoints are named using the following convention:

    ```
    [model_name]_p[#past_frames]_f[#future_frames]_[#photons_per_pixel_per_frame]PPP.pth
    ```

    • Total input frames: #past_frames + #future_frames + 1
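As a hedged illustration, the naming convention above can be parsed programmatically. Only the pattern itself comes from the convention; the example filename and the helper's name are hypothetical:

```python
import re

# Regex mirroring the checkpoint naming convention described above.
CKPT_RE = re.compile(
    r"(?P<model>.+)_p(?P<past>\d+)_f(?P<future>\d+)_(?P<ppp>[\d.]+)PPP\.pth$"
)

def parse_checkpoint_name(name):
    """Return (model, past_frames, future_frames, ppp, total_input_frames)."""
    m = CKPT_RE.match(name)
    if m is None:
        raise ValueError(f"not a recognized checkpoint name: {name}")
    past, future = int(m["past"]), int(m["future"])
    # Total input frames = past + future + 1 (the reference frame).
    return m["model"], past, future, float(m["ppp"]), past + future + 1

# Hypothetical example filename following the convention:
print(parse_checkpoint_name("QUIVER_p2_f2_3.25PPP.pth"))
```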

Code Structure

The repository is organized into the following scripts:

1. dataloader.py

  • Loads data directly from MP4 videos.
  • Contains details on the simulation used for training.
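The exact simulation lives in dataloader.py. As a rough, hedged sketch of how quanta (single-photon) frames are commonly simulated from clean video, one can scale intensities to a target photons-per-pixel (PPP) level and threshold Poisson photon arrivals to binary frames. All names and parameter values here are illustrative, not taken from the repository:

```python
import numpy as np

def simulate_quanta_frame(clean, ppp, rng=None):
    """Simulate one binary single-photon frame from a clean frame.

    clean: float array in [0, 1] (normalized intensity).
    ppp:   target mean photons per pixel per frame, e.g. 3.25.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Scale intensity so the mean photon flux matches the target PPP.
    flux = ppp * clean / max(clean.mean(), 1e-8)
    # A binary single-photon detector fires if at least one photon arrives:
    # P(detection) = 1 - exp(-flux) under Poisson arrivals, unit exposure.
    return (rng.random(clean.shape) < 1.0 - np.exp(-flux)).astype(np.uint8)

frame = simulate_quanta_frame(np.full((4, 4), 0.5), ppp=3.25,
                              rng=np.random.default_rng(0))
```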

2. model.py & archs.py

  • Defines the model architecture and related modules.

3. input_args.py

  • Contains hyperparameters controlling training and testing:
    • num_frames: Number of frames used as input to the model.
    • patch_size: Patch size for training.
    • future_frames: Number of frames to the right of the reference frame.
    • past_frames: Number of frames to the left of the reference frame.
    • weights_dir: Checkpoint path.
    • load_model_flag: Boolean flag to load model checkpoints.
    • lr: Learning rate.
    • batch_size: Batch size during training (default batch size for testing is 1).
    • save_path: Directory to save outputs during testing.
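A minimal sketch of how these hyperparameters might be exposed via argparse; the default values below are placeholders, not the repository's actual settings:

```python
import argparse

def build_parser():
    # Hyperparameters mirroring those documented for input_args.py;
    # all defaults are illustrative placeholders only.
    p = argparse.ArgumentParser(description="QUIVER training/testing options")
    p.add_argument("--num_frames", type=int, default=5,
                   help="number of frames used as model input")
    p.add_argument("--patch_size", type=int, default=128,
                   help="patch size for training")
    p.add_argument("--future_frames", type=int, default=2,
                   help="frames to the right of the reference frame")
    p.add_argument("--past_frames", type=int, default=2,
                   help="frames to the left of the reference frame")
    p.add_argument("--weights_dir", type=str, default="./checkpoints",
                   help="checkpoint path")
    p.add_argument("--load_model_flag", action="store_true",
                   help="load model checkpoints")
    p.add_argument("--lr", type=float, default=1e-4, help="learning rate")
    p.add_argument("--batch_size", type=int, default=1,
                   help="batch size (default for testing is 1)")
    p.add_argument("--save_path", type=str, default="./results",
                   help="directory to save outputs during testing")
    return p

args = build_parser().parse_args([])
```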

4. test.py

  • Modify the input hyperparameters mentioned above and run the following command for testing:

    ```
    python test.py
    ```

5. train.py

  • Modify the input hyperparameters mentioned above and run the following command for training:

    ```
    python train.py
    ```

Post-Processing

  • After generating outputs, post-processing is done using MATLAB’s localtonemap function.
  • The same post-processing is applied to all models, including the baselines.

Contact

For any questions or clarifications, feel free to reach out.


Citation

If you find this work useful in your research, please consider citing:

@inproceedings{chennuri2024quanta,
  title={Quanta Video Restoration},
  author={Chennuri, Prateek and Chi, Yiheng and Jiang, Enze and Godaliyadda, GM Dilshan and Gnanasambandam, Abhiram and Sheikh, Hamid R and Gyongy, Istvan and Chan, Stanley H},
  booktitle={European Conference on Computer Vision},
  pages={152--171},
  year={2024},
  organization={Springer}
}