How to extract pixelwise features? #1181
Replies: 2 comments
-
@morsingher only VGG and I think one other model (DarkNet?) have at least one activated stem layer without a downsample (so an HxW feature map is possible), however that feature map has a depth of 1, so it's not very useful by itself. To get a feature map with depth you pretty much have to upsample.
-
@rwightman thanks for your answer! I have a follow-up question, which you may find pretty basic, but I'll still ask it. Image models pre-trained on ImageNet assume an input size of 224 x 224. I have tried to extract a pyramid of features with a ResNet at an arbitrary size H x W, and it does indeed work. So is this just the way to go? Can I run inference at an arbitrary size to extract features for H x W images?
-
In the documentation (here: https://rwightman.github.io/pytorch-image-models/feature_extraction/#multi-scale-feature-maps-feature-pyramid), it says "By default 5 strides will be output from most models (not all have that many), with the first starting at 2 (some start at 1 or 4)". Is there any model where features are extracted for each pixel of the input?
To be clearer, I would like to map an H x W image to an H x W x C feature map. A naive way would be to upsample a low-resolution feature map, but I was wondering if it can be done natively with some model.
Thank you in advance for the help!
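The "naive" upsampling route mentioned above can be sketched with plain PyTorch: upsample each level of a feature pyramid to the input resolution and concatenate along channels, giving one H x W x C descriptor per pixel. The shapes and strides here are illustrative stand-ins for a real pyramid:

```python
import torch
import torch.nn.functional as F

H, W = 64, 96  # illustrative input resolution

# Stand-in pyramid: (channels, stride) pairs mimicking backbone stages.
pyramid = [torch.randn(1, c, H // s, W // s)
           for c, s in [(64, 2), (128, 4), (256, 8)]]

# Upsample every level to H x W and stack the channels.
dense = torch.cat(
    [F.interpolate(f, size=(H, W), mode='bilinear', align_corners=False)
     for f in pyramid],
    dim=1,
)
print(dense.shape)  # torch.Size([1, 448, 64, 96]) -> 448 = 64 + 128 + 256
```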