
Combined detector / black area detector / bugfix process frame index #99

Merged (9 commits, Feb 12, 2025)

Conversation

MarcelMB
Contributor

@MarcelMB MarcelMB commented Jan 31, 2025

The _detect_black_pixels method checks whether consecutive pixels in a row are black:

black_pixel_consecutive_threshold: 5 # If 5 consecutive pixels in a row are black, discard the frame
black_pixel_value_threshold: 16 # Any pixel value ≤ 16 is considered "black"

These two parameters in denoise_example.yaml decide how many consecutive pixels need to be black and what is defined as black (black pixels don't have a value of zero; when I checked, it was around 16)
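As a rough sketch, a check like this could look as follows (has_black_run is a hypothetical helper name, not the actual _detect_black_pixels implementation):

```python
import numpy as np

def has_black_run(frame: np.ndarray,
                  consecutive_threshold: int = 5,
                  value_threshold: int = 16) -> bool:
    """Return True if any row contains `consecutive_threshold` or more
    consecutive pixels at or below `value_threshold`."""
    dark = frame <= value_threshold
    for row in dark:
        run = 0
        for is_dark in row:
            run = run + 1 if is_dark else 0
            if run >= consecutive_threshold:
                return True
    return False

# A frame with a 6-pixel dark run in one row is flagged:
frame = np.full((4, 10), 100, dtype=np.uint8)
frame[2, 2:8] = 16
print(has_black_run(frame))  # True
```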

updated detect_frame_with_noisy_buffer to work with two methods simultaneously

added test_frame_helper.py to the tests to also exercise the gradient method on blacked_out subcategory frames (since the gradient method now also incorporates finding black frames)
all tests pass (except the xfail that Jonny marked, for obvious reasons)

curious whether this makes sense to you guys or if there are any major concerns!

example image in debug mode:

[screenshot: debug-mode output, 2025-01-31]

detects the error in this image:

[screenshot: detected error region, 2025-01-31]


📚 Documentation preview 📚: https://miniscope-io--99.org.readthedocs.build/en/99/

@MarcelMB MarcelMB added the enhancement New feature or request label Jan 31, 2025
@MarcelMB MarcelMB self-assigned this Jan 31, 2025
@sneakers-the-rat
Collaborator

sorry i was being unclear when i said separate "method" - i meant method there as being like "means of detecting noise," not "python method." let's split each of these into a different class with a shared calling convention, because we want to be able to apply them together (do black pixel detection, gradient detection, and other preprocessing steps) and make it clear which config values correspond to each of those different strategies (we already had one almost-bug from reusing the threshold param) :)

i'll handle this next week

@sneakers-the-rat
Collaborator

sneakers-the-rat commented Feb 1, 2025

question for y'all - is there any time where we would expect there to be black areas, like daniel was mentioning that sometimes there might be an all-black row or column on the sides of the image? I think that consecutive black pixels (as in literally value == 0, not just a low value) is the right test, just checking if there are any edge cases where we might need to e.g. check for two consecutive rows, or check that the black pixels aren't on the borders

edit: oh wait i see now that it isn't value == 0, but below some threshold, i haven't even looked to see whether the pixels we are seeing are literally zero or not, but it seems like we might want to make that be the requirement: we should expect there to be sequential dark-but-not-zero pixels in many if not most videos i think? but being literally 0 is unlikely

@t-sasatani
Collaborator

t-sasatani commented Feb 2, 2025

My understanding is that the problematic black regions here occur because of the black padding in the streamDaq. So:

  • I think we should start by finding pixels that are strictly 0. Except for the padded black zones, I can't think of any other dark regions we should directly eliminate.
  • Finding pixels with intensity under 16 could also eliminate dark areas, depending on the lighting conditions (though this is unlikely to happen with our low-contrast setup).
  • Unfortunately, the always black rows/columns are something we wish we had and are not present in the image now.
  • We should eventually move this into the streamDaq or use metadata from that, though I like doing it here for now to simplify/decouple stuff.
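To illustrate the difference between the two criteria discussed above (strictly 0 versus at or below 16), here is a small synthetic example (the pixel values are made up, not taken from real streamDaq output):

```python
import numpy as np

frame = np.array([[0, 0, 16, 40],
                  [12, 16, 200, 0]], dtype=np.uint8)

zero_mask = frame == 0    # only strictly black (padded) pixels
dark_mask = frame <= 16   # also catches dark-but-nonzero pixels

print(int(zero_mask.sum()), int(dark_mask.sum()))  # 3 6
```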

@MarcelMB
Contributor Author

MarcelMB commented Feb 3, 2025

sorry i was being unclear when i said separate "method" - i meant method there as being like "means of detecting noise" not "python method." lets split each of these into a different class with a shared calling convention, bc we want to be able to apply them together (do black pixel detection, gradient detection, and other preprocessing steps)

  • that's possible now. In this PR you can run gradient (to check pattern noise) and black detection simultaneously, and they also use different config values, not a shared threshold

just checking if there are any edge cases where we might need to e.g. check for two consecutive rows, or check that the black pixels aren't on the borders

  • it makes sense to me that we don't just check for 5 consecutive black pixels in one row and mark the frame as black. I think we would not expect any black pixels in the real data, at least not below a threshold of around 20. But we could change the criterion to, say, 2 or 3 rows having black pixels, to be a bit more liberal

  • Right now the minimum pixel value is always 16 (no 0 pixel values in the data)

Unfortunately, the always black rows/columns are something we wish we had and are not present in the image now.

  • for what would we want to use this? just curious.

@jonny how would you structure adding this black method differently? You can go ahead and add commits here and I can take a look.

@sneakers-the-rat
Collaborator

Right now the minimum pixel is always 16

interesting... why is that? I would think that it would be 0 from here

@jonny how would you structure adding this black method differently?

In general, this is a more flexible pattern:

from abc import ABC, abstractmethod

class ThingDoer(ABC):

    @abstractmethod
    def do(self, data) -> "Result":
        ...

class Version1(ThingDoer):
    def do(self, data) -> "Result":
        ...  # do it version 1's way

class Version2(ThingDoer):
    def do(self, data) -> "Result":
        ...  # do it version 2's way
than this

class ThingDoer:
    def do_version_1(self, data) -> result: ...
    
    def do_version_2(self, data) -> result: ...

for a few reasons, primarily that we then have a consistent calling pattern - "all ThingDoers behave the same way" - and they have all their functionality decoupled: rather than checking if there are no values, or accidentally messing up the processing of one method by changing another (as in the 'process first frame' bug from last week), we avoid those problems in the first place, and we avoid needing a dispatching method. it's also way clearer to end users than stacking them up into a single class - even if the config parameters are independent and we know that, an end user looking at it will be like "which of these are relevant to what again?" as opposed to "ah yes, the BlackNoiseDetection class takes these specific config values..." this will also become very relevant when these get integrated into the pipelining system
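A runnable toy version of that pattern, showing how detectors with one shared calling convention compose (class and parameter names here are illustrative, not the actual project classes):

```python
from abc import ABC, abstractmethod

class NoiseDetector(ABC):
    """Shared calling convention: every detector exposes the same detect()."""
    @abstractmethod
    def detect(self, frame: list) -> bool: ...

class ZeroRunDetector(NoiseDetector):
    def __init__(self, run_length: int):
        self.run_length = run_length  # config local to this strategy

    def detect(self, frame):
        run = 0
        for px in frame:
            run = run + 1 if px == 0 else 0
            if run >= self.run_length:
                return True
        return False

class RangeDetector(NoiseDetector):
    def __init__(self, max_value: int):
        self.max_value = max_value  # config local to this strategy

    def detect(self, frame):
        return max(frame) > self.max_value

# Detectors compose without a dispatching method, because they share one interface:
detectors = [ZeroRunDetector(run_length=3), RangeDetector(max_value=255)]
frame = [10, 0, 0, 0, 42]
print(any(d.detect(frame) for d in detectors))  # True
```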

@t-sasatani
Collaborator

t-sasatani commented Feb 4, 2025

Right now the minimum pixel is always 16

Very interesting, and this is completely unexpected. It might be a new type of error or some glitch in casting, encoding, file i/o, etc. I can't work on this today but I'll take a look later too.

for what would we want to use this? just curious.

We had some discussions on Slack about how this could potentially help eliminate the stripes. Though it's unlikely that this will help, it still could be worth a try if this area exists.

@t-sasatani t-sasatani changed the base branch from gradient_first_frame to feat-preprocess February 5, 2025 04:01
@t-sasatani t-sasatani self-assigned this Feb 6, 2025
@t-sasatani t-sasatani force-pushed the black_out_method branch 2 times, most recently from 25ea778 to fa36990 Compare February 6, 2025 05:37
@t-sasatani t-sasatani changed the title black frame method integrated as a second method inside gradient method Combined detector / black area detector / bugfix process frame index Feb 6, 2025
@t-sasatani
Collaborator

t-sasatani commented Feb 6, 2025

I made some updates so multiple detection methods can be mixed and did some general refactoring around the processing module after chatting with @MarcelMB. Reviewing this and the feat-preprocess branch separately doesn't make sense, as these are tightly coupled. So, I'll merge this as soon as the pixel baseline bug (if it is a bug) is resolved.

So now the config of the patch-based noise detection looks like this:

noise_patch:
  enable: true
  method: [gradient, black_area]
  mean_error_config:
    threshold: 40
    device_config_id: wireless-200px
    buffer_split: 8
    comparison_unit: 1000
    diff_multiply: 1
  gradient_config:
    threshold: 20
  black_area_config:
    consecutive_threshold: 5
    value_threshold: 16
  output_result: true
  output_noise_patch: true
  output_diff: true
  output_noisy_frames: true

and each detector is combined like this:

class InvalidFrameDetector(BaseSingleFrameHelper):
    """
    Helper class for combined invalid frame detection.
    """

    def __init__(self, noise_patch_config: NoisePatchConfig):
        """
        Initialize the FrameProcessor object.
        Block size/buffer size will be set by dev config later.

        Returns:
            NoiseDetectionHelper: A NoiseDetectionHelper object
        """
        self.config = noise_patch_config
        if noise_patch_config.method is None:
            raise ValueError("No noise detection methods provided")
        self.methods = noise_patch_config.method

        if "mean_error" in self.methods:
            self.mse_detector = MSENoiseDetector(noise_patch_config.mean_error_config)
        if "gradient" in self.methods:
            self.gradient_detector = GradientNoiseDetector(noise_patch_config.gradient_config)
        if "black_area" in self.methods:
            self.black_detector = BlackAreaDetector(noise_patch_config.black_area_config)

    def find_invalid_area(self, frame: np.ndarray) -> Tuple[bool, np.ndarray]:
        """
        Process a single frame and verify if it is valid.

        Parameters:
            frame (np.ndarray): The frame to process.

        Returns:
            Tuple[bool, np.ndarray]: A boolean indicating invalid frames and the invalid area.
        """
        noisy_flag = False
        combined_noisy_area = np.zeros_like(frame, dtype=np.uint8)

        if "mean_error" in self.methods:
            noisy, noisy_area = self.mse_detector.find_invalid_area(frame)
            combined_noisy_area = np.maximum(combined_noisy_area, noisy_area)
            noisy_flag = noisy_flag or noisy

        if "gradient" in self.methods:
            noisy, noisy_area = self.gradient_detector.find_invalid_area(frame)
            combined_noisy_area = np.maximum(combined_noisy_area, noisy_area)
            noisy_flag = noisy_flag or noisy

        if "black_area" in self.methods:
            noisy, noisy_area = self.black_detector.find_invalid_area(frame)
            combined_noisy_area = np.maximum(combined_noisy_area, noisy_area)
            noisy_flag = noisy_flag or noisy

        return noisy_flag, combined_noisy_area
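The np.maximum calls above merge the per-detector masks into a union of all flagged areas; a tiny standalone illustration of that combination step (with synthetic masks, not output from the real detectors):

```python
import numpy as np

# Hypothetical per-detector masks (255 = flagged pixel, 0 = clean):
mask_gradient = np.array([[0, 255], [0, 0]], dtype=np.uint8)
mask_black = np.array([[0, 0], [255, 0]], dtype=np.uint8)

# Elementwise maximum keeps a pixel flagged if any detector flagged it:
combined = np.zeros_like(mask_gradient)
combined = np.maximum(combined, mask_gradient)
combined = np.maximum(combined, mask_black)

noisy_flag = bool(combined.any())
print(noisy_flag, combined.tolist())  # True [[0, 255], [255, 0]]
```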

@t-sasatani
Collaborator

t-sasatani commented Feb 12, 2025

I looked into the video output from the stream module, and there are areas of 16 at that point. So, it's probably not a bug in this module. I just posted issue #101 and will get this merged into the base denoising branch.

@t-sasatani t-sasatani merged commit 00542e3 into feat-preprocess Feb 12, 2025
18 checks passed