Question about RandomErasing and normalization #1357
@PaulHeraty Normalize uses those ImageNet stats with the goal of making the data fed to the network have mean 0 and std-dev 1.0. That's why RandomErasing in timm is placed after Normalize (unlike some implementations that erase with 'black' squares before normalizing and end up throwing the mean off). It won't be perfect, especially with respect to the extremes of the spread. I doubt this makes a significant difference, but if you're concerned or curious, you could try using a truncated normal to more closely match the range and see if it has any impact. You'd want to leave the std-dev at 1.0, but use upper/lower bounds on the truncated normal that match the equivalent range you get when scaling a [0, 1] value by the ImageNet stats.
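
For anyone wanting to try that, here's a minimal sketch of what the suggested truncated-normal fill could look like. `trunc_normal_fill` is a hypothetical helper, not part of timm; the bounds are derived from the same ImageNet stats mentioned above:

```python
import torch
from torch.nn.init import trunc_normal_

# ImageNet stats, the same defaults Normalize() uses in timm.
IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)

def trunc_normal_fill(patch_size):
    """Hypothetical helper: fill an erase patch of shape (C, H, W) with
    N(0, 1) noise truncated per channel to the range a [0, 1] pixel can
    actually occupy after (x - mean) / std normalization."""
    patch = torch.empty(patch_size)
    for ch, (m, s) in enumerate(zip(IMAGENET_MEAN, IMAGENET_STD)):
        lower = (0.0 - m) / s  # e.g. ~ -2.12 for the red channel
        upper = (1.0 - m) / s  # e.g. ~ +2.25 for the red channel
        # mean 0 / std 1 kept as-is; only the tails get clipped
        trunc_normal_(patch[ch], mean=0.0, std=1.0, a=lower, b=upper)
    return patch
```

You could swap something like this in for the `normal_()` fill in a custom RandomErasing to test whether it makes any measurable difference.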
Hello,
when I set up an augmentation pipeline using create_transform(), it calls ToTensor() and Normalize() before RandomErasing(). My question relates to the normalization used. By default, Normalize normalizes around the ImageNet mean and standard deviation, which are (0.485, 0.456, 0.406) and (0.229, 0.224, 0.225) respectively. So after Normalize, my image is in the range of roughly -2.11 to +2.64.
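
For reference, that range falls directly out of the stats:

```python
import torch

mean = torch.tensor([0.485, 0.456, 0.406])
std = torch.tensor([0.229, 0.224, 0.225])

# Normalize maps a [0, 1] pixel x to (x - mean) / std, so the per-channel
# extremes of the normalized data are:
lo = (0.0 - mean) / std  # tensor([-2.1179, -2.0357, -1.8044])
hi = (1.0 - mean) / std  # tensor([ 2.2489,  2.4286,  2.6400])
```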
However, it seems that the RandomErasing augmentation uses the in-place normal_() method to set the pixel values for any erased segments, and normal_() draws from a distribution with a mean of 0.0 and std of 1.0. This means the pixel values in the erased segment can take values well outside the normalized range, roughly -4 to +4 in practice.
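
To get a rough sense of the mismatch, one can sample from a standard normal and check how often values land outside the normalized range; this is an illustrative check, not timm code, using the red channel's bounds from the snippet above:

```python
import torch

samples = torch.randn(1_000_000)
# Fraction of N(0, 1) fill values outside the red channel's
# normalized range of roughly [-2.1179, +2.2489]:
outside = ((samples < -2.1179) | (samples > 2.2489)).float().mean()
print(outside)  # roughly 0.03, i.e. about 3% of erased pixel values
```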
Is this a bug? Should RandomErasing also call normal_() with the same mean/std as used by Normalize()?