Add video preprocessing (denoising) feature #83
Conversation
Force-pushed 1385223 to bc9a268
omfg you already did it!?! You are so fast. Can't wait to check this out!!
Force-pushed f051300 to 87e8a44
Amazing!!! Thanks, Takuya, for working on this already!
Agree, I think we want to drop data rather than copy data - it's scientific data after all and the numbers matter, so duplicating a frame is effectively fabricating data (even though this wouldn't be a bad strategy if we were streaming a movie and didn't care as much about the pixel values).
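A minimal sketch of the difference being discussed, assuming a simple per-frame boolean flag from some upstream detector (array names, shapes, and the flagged indices are illustrative, not the PR's implementation):

```python
import numpy as np

# toy stack: (frames, height, width); a real recording would come from the video reader
video = np.random.randint(0, 255, size=(100, 200, 200), dtype=np.uint8)
broken = np.zeros(100, dtype=bool)
broken[[10, 42]] = True  # frames flagged as broken by some upstream detector

# drop: no fabricated pixel values, but the frame count (and timing) changes
dropped = video[~broken]

# duplicate: keeps the frame count, but the copied frames are not real data
patched = video.copy()
prev_idx = np.maximum(np.flatnonzero(broken) - 1, 0)
patched[broken] = video[prev_idx]
```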
This is a decent pair of thoughts that sort of highlights that we might want to separate cosmetic/display-oriented processing stages from signal-repair stages: the frequency filtering is repairing the image, while the minimum projection is just for display purposes (right? I'm assuming that in analysis people will want the full signal). I've been working through the backlog of issues after the holiday, and my first big chunk of work will be on mio, so I'll review this and related stuff this week. Thanks again for this, Takuya :)
Thanks for the comments! I'm stuck with other stuff at the moment but will make this ready for review and ping you guys hopefully next week. Something I want to know from @sneakers-the-rat is whether the high-level structure seems mergeable with the pipeline refactor. I have no intention of making this fully compatible, because we need it now and it's an independent module, but the code structure is pretty arbitrary, so it would be good if there is something I can easily anchor to.

And yes, Phil mentioned too that dropping broken frames would be good, so that's planned (I might just add a drop option because masking is already there). I was just too lazy to look into how processing pipelines handle this and track dropped buffers. As for the FFT and the minimum projection, these are just modules, so I think they should simply be available whenever needed. The frequency mask might be used in actual preprocessing before CMNE pipelines, but I'm not sure.
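To make the cosmetic-versus-repair distinction concrete, here is a hedged sketch (not the PR's implementation) of the two kinds of stages kept separate: an FFT-based frequency filter that repairs each frame for analysis, and a minimum projection that only produces a display image. The function names, the rectangular mask, and the cutoff are illustrative assumptions.

```python
import numpy as np

def frequency_filter(frame: np.ndarray, cutoff: int = 30) -> np.ndarray:
    """Signal-repair stage: keep low spatial frequencies, zero out the rest."""
    spectrum = np.fft.fftshift(np.fft.fft2(frame.astype(np.float32)))
    mask = np.zeros(spectrum.shape, dtype=bool)
    cy, cx = spectrum.shape[0] // 2, spectrum.shape[1] // 2
    mask[cy - cutoff:cy + cutoff, cx - cutoff:cx + cutoff] = True
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * mask)))

def minimum_projection(stack: np.ndarray) -> np.ndarray:
    """Display-only stage: collapse the stack along time for visualization."""
    return stack.min(axis=0)

stack = np.random.rand(50, 200, 200).astype(np.float32)
repaired = np.stack([frequency_filter(f) for f in stack])  # analysis keeps the full signal
preview = minimum_projection(repaired)                      # display image only
```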
…ine before actually working on instantiating, add target wirefree pipeline config
… variable errors, corrected types in signatures, use optimized double-diff instead of double-allocating
Broken buffer detection - contrast method
extensive test for gradient detection
Many refactors are going on in #99. It'll make sense to merge that before looking into this PR. Thanks for all the feedback. I might have missed something, but I went through all the comments.
This PR adds a denoising module for neural recording videos. It can be used offline, and with some updates, it could be used in real time with the `streamDaq`. It would be better to quickly merge this into the main branch so that other processing features can be added in a relatively stable location; also, nothing depends on this right now anyway.
Processing features
Remove broken buffers by copying in the buffer at the same position from the previous frame. The following is an example frame where noisy regions are detected and patched with the previous frame's buffer:
[image: example frame with detected noisy regions patched from the previous frame's buffer]
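A hedged sketch of the patching idea, assuming buffers map to fixed-height horizontal blocks and that corrupted blocks can be flagged with a crude gradient/contrast metric. The block size, metric, and threshold below are assumptions for illustration, not the PR's actual detector.

```python
import numpy as np

def patch_broken_buffers(prev: np.ndarray, curr: np.ndarray,
                         block_rows: int = 8, grad_thresh: float = 40.0) -> np.ndarray:
    """Return `curr` with suspicious horizontal blocks replaced from `prev`."""
    out = curr.copy()
    for top in range(0, curr.shape[0], block_rows):
        block = curr[top:top + block_rows].astype(np.float32)
        # crude corruption metric: mean absolute vertical gradient inside the block
        grad = np.abs(np.diff(block, axis=0)).mean() if block.shape[0] > 1 else 0.0
        if grad > grad_thresh:  # block looks like stripe noise rather than signal
            out[top:top + block_rows] = prev[top:top + block_rows]
    return out

# usage on consecutive frames of a stack (first frame has no predecessor):
# cleaned = [frames[0]] + [patch_broken_buffers(frames[i - 1], frames[i])
#                          for i in range(1, len(frames))]
```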
Interface
You can run this with `mio process denoise` on an example video. The denoising parameters and export settings can be defined in a YAML file.
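As a rough illustration of what such a YAML parameter file might contain, here is a sketch that writes one out in Python. Every key name below is an assumption based on the features discussed in this PR (broken-buffer patching, frequency masking, minimum projection, export); the real schema is defined by the config model in the PR and may differ.

```python
import yaml  # PyYAML

# hypothetical layout; the real field names come from the config model in this PR
example_config = {
    "noise_patch": {        # broken-buffer detection and patching (hypothetical keys)
        "enable": True,
        "method": "gradient",
        "threshold": 20,
    },
    "frequency_masking": {  # FFT-based spatial frequency filter (hypothetical keys)
        "enable": True,
        "cast_float32": True,
    },
    "minimum_projection": { # display-oriented projection (hypothetical keys)
        "enable": False,
    },
    "export": {             # export settings (hypothetical keys)
        "output_dir": "denoised",
        "write_video": True,
    },
}

with open("denoise_example.yml", "w") as f:
    yaml.safe_dump(example_config, f, sort_keys=False)
```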
Minor changes
Moved `VideoWriter` to the `io` module because it's pretty generic, and I wanted to use it for the denoising.

📚 Documentation preview 📚: https://miniscope-io--83.org.readthedocs.build/en/83/