
Add video preprocessing (denoising) feature #83

Open · wants to merge 68 commits into main from feat-preprocess

Conversation


@t-sasatani t-sasatani commented Dec 7, 2024

This PR adds a denoising module for neural recording videos. It can be used offline, and with some updates, it could be used in real-time with the streamDaq.

It would be good to merge this into main quickly so that other processing features can be added on a relatively stable base. Also, nothing depends on this yet anyway.

Processing features

  • Detect broken buffers by comparing each buffer with the same-position buffer in the previous frame (only mean error is implemented now, but more statistics can be added).
  • Gradient-based broken frame detection led by @MarcelMB. Merged from Broken buffer detection - contrast method #94
  • Remove frames that contain broken buffers. The removed frames are stacked and tracked separately so we can examine which frames were dropped. Alternatively, repair broken buffers by copying the same-position buffer from the previous frame.
  • Spatial frequency-based filtering.
  • Generate the minimum projection out of a stack of frames.
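The mean-error check and previous-frame patching described above could be sketched roughly like this (buffer size, threshold, and function names are assumptions for illustration, not the actual mio implementation):

```python
import numpy as np

# Illustrative sketch only -- BUFFER_ROWS, the threshold, and the
# function names are assumptions, not the mio API.
BUFFER_ROWS = 8            # rows per acquisition buffer (assumed)
MEAN_ERROR_THRESHOLD = 20.0

def detect_broken_buffers(frame, prev_frame, threshold=MEAN_ERROR_THRESHOLD):
    """Return indices of buffers whose mean absolute error against the
    same-position buffer in the previous frame exceeds the threshold."""
    broken = []
    n_buffers = frame.shape[0] // BUFFER_ROWS
    for i in range(n_buffers):
        rows = slice(i * BUFFER_ROWS, (i + 1) * BUFFER_ROWS)
        err = np.mean(np.abs(frame[rows].astype(float) - prev_frame[rows].astype(float)))
        if err > threshold:
            broken.append(i)
    return broken

def patch_buffers(frame, prev_frame, broken):
    """Patch broken buffers by copying the same-position buffers from
    the previous frame (the strategy discussed in this PR)."""
    patched = frame.copy()
    for i in broken:
        rows = slice(i * BUFFER_ROWS, (i + 1) * BUFFER_ROWS)
        patched[rows] = prev_frame[rows]
    return patched
```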

The following is an example frame where noisy regions are detected and patched with the prior frame's buffer:
[image: example frame with detected noisy regions patched from the prior frame]

Interface

You can run this with mio process denoise using an example video.
The denoising parameters and export settings can be defined with a YAML file.

mio process denoise -i .\user_dir\test.avi -c denoise_example
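The PR text doesn't show the config contents, but a hypothetical denoise_example YAML might look like the following (every key name here is a guess for illustration, not the actual mio schema):

```yaml
# Hypothetical denoise_example config -- key names are illustrative only.
noise_patch:
  enable: true
  method: mean_error      # or a gradient/contrast-based method
  threshold: 20
frequency_masking:
  enable: true
  cutoff_radius: 10       # spatial-frequency mask parameter
minimum_projection:
  enable: true
export:
  output_dir: user_dir/output
  write_video: true
```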

Minor changes

  • I moved VideoWriter to the io module because it's pretty generic and I wanted to use it for denoising.

📚 Documentation preview 📚: https://miniscope-io--83.org.readthedocs.build/en/83/


coveralls commented Dec 7, 2024

Pull Request Test Coverage Report for Build 12208523886

Details

  • 17 of 346 (4.91%) changed or added relevant lines in 8 files are covered.
  • No unchanged relevant lines lost coverage.
  • Overall coverage decreased (-14.7%) to 62.478%

Changes missing coverage:

| File | Covered Lines | Changed/Added Lines | % |
| --- | --- | --- | --- |
| miniscope_io/cli/main.py | 0 | 2 | 0.0% |
| miniscope_io/cli/process.py | 0 | 12 | 0.0% |
| miniscope_io/io.py | 15 | 32 | 46.88% |
| miniscope_io/models/process.py | 0 | 32 | 0.0% |
| miniscope_io/models/frames.py | 0 | 52 | 0.0% |
| miniscope_io/plots/video.py | 0 | 59 | 0.0% |
| miniscope_io/process/video.py | 0 | 155 | 0.0% |

Totals:
  • Change from base Build 12208310258: -14.7%
  • Covered Lines: 1069
  • Relevant Lines: 1711

💛 - Coveralls

@t-sasatani force-pushed the feat-preprocess branch 3 times, most recently from 1385223 to bc9a268 on December 7, 2024 01:05
@sneakers-the-rat (Collaborator)

omfg you already did it!?!??!?!?! u are so fast. cant wait to check this out!!!!!


coveralls commented Dec 11, 2024

Coverage Status

coverage: 75.587% (-3.5%) from 79.112%
when pulling bf7ee75 on feat-preprocess
into 56397e9 on main.


MarcelMB commented Jan 8, 2025

amazing!!!! Thanks, Takuya for working on this already!

  • I was wondering whether replacing a broken buffer with the one from the previous frame is a good strategy. For a single broken frame that might be fine, but if there is a stretch of corrupted data we would basically show the same image for seconds, so I'm not sure.

  • By subtracting a minimum projection at the end, could low-intensity biological signals eventually get lost?

  • I need to understand the frequency mask a bit more to see what goes through and what is blocked.

@sneakers-the-rat (Collaborator)

> I was wondering if replacing a broken buffer with the one from the previous frame is a good strategy. Unsure. For one broken frame that might be fine but if there is a stretch of corrupted data and we basically have the same image for seconds not sure.

agree, I think we want to drop data rather than copy data - it's scientific data after all and the numbers matter, so duplicating a frame is effectively fabricating data (even though this is not a bad strategy if we were streaming a movie and didn't care about the pixel values as much)

> by subtracting a minimum projection at the end I was wondering if maybe not low-intensity biological signals could eventually get lost?

> I need to understand the frequency mask a bit more to see what goes through and what is blocked.

this is a decent pair of thoughts that sort of highlights that we might want to separate cosmetic/display-oriented processing stages from signal repair stages: the frequency filtering is repairing the image, while doing the minimum projection is just for display purposes (right? i'm assuming in analysis people will want to have the full signal).
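For concreteness, the display-oriented minimum-projection step being discussed could be sketched like this (an illustration of the general technique, not the PR's actual implementation; the function name is made up):

```python
import numpy as np

def subtract_min_projection(stack):
    """Subtract the per-pixel minimum over a stack of frames to flatten
    static background -- a cosmetic step that also removes any constant
    low-intensity signal, which is the concern raised above.

    stack: (n_frames, H, W) uint8 array.
    """
    min_proj = stack.min(axis=0)
    # Work in a wider dtype to avoid uint8 underflow, then clip back.
    out = stack.astype(np.int16) - min_proj.astype(np.int16)
    return out.clip(0, 255).astype(np.uint8)
```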

been working through backlog of issues after the holiday and my first big chunk of work will be on mio so i'll review this and stuff this week. thanks again for this Takuya :)

@t-sasatani (Collaborator, Author)

Thanks for the comments! I'm stuck with other stuff at the moment but will make this ready for review and ping you all, hopefully next week. Something I'd like to know from @sneakers-the-rat is whether the high-level structure seems mergeable into the pipeline refactor. I don't intend to make this fully compatible, because we need it now and it's an independent module, but the code structure is pretty arbitrary, so it would be good if there is something I can easily anchor to.

And yes, Phil also mentioned that dropping broken frames would be good, so that's planned (we might just add a drop option, since masking is already there). I was just too lazy to look into how processing pipelines handle this and track dropped buffers.

For the FFT and minimum projection, these are just modules, so they should be available whenever needed. The frequency mask might be used in actual preprocessing before CNMF-E pipelines, but I'm not sure.
https://github.com/Aharoni-Lab/Miniscope-v4/wiki/Removing-Horizontal-Noise-from-Recordings
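The frequency-mask idea from the linked wiki (removing horizontal stripe noise in Fourier space) could be sketched roughly like this; the function and parameter names are assumptions for illustration, not the mio implementation:

```python
import numpy as np

def remove_horizontal_noise(frame, band_halfwidth=1):
    """Suppress horizontal stripe noise by masking the vertical-frequency
    band it occupies. Horizontal stripes vary only along y, so their
    energy concentrates in the kx ~ 0 column of the 2D FFT; zeroing that
    column (except DC, to preserve overall brightness) removes them."""
    f = np.fft.fftshift(np.fft.fft2(frame.astype(float)))
    h, w = frame.shape
    cy, cx = h // 2, w // 2
    mask = np.ones_like(f)
    mask[:, cx - band_halfwidth:cx + band_halfwidth + 1] = 0  # kill kx ~ 0 band
    mask[cy, cx] = 1  # keep the DC component (mean brightness)
    out = np.fft.ifft2(np.fft.ifftshift(f * mask)).real
    return np.clip(out, 0, 255).astype(np.uint8)
```

Note this also discards genuine image content that is constant along x, which is part of why it's worth understanding exactly what the mask passes and blocks.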

sneakers-the-rat and others added 4 commits January 17, 2025 18:01
…ine before actually working on instantiating, add target wirefree pipeline config
… variable errors, corrected types in signatures, use optimized double-diff instead of double-allocating

t-sasatani commented Feb 2, 2025

Many refactors are going on in #99. It'll make sense to merge that before looking into this PR.

Thanks for all the feedback. I might have missed something, but I went through all the comments.

  • We should make gradient-based detection the primary noise detection approach, but I'm also thinking of keeping the mean_error method with some warnings (probably until we start benchmarking and combining methods). This makes the helper methods slightly more complex, so let me know if I should move it to a different branch.
  • I think the YAML interface and export path need improvement, but that's pretty minor, so we can push it to future PRs.
