-
Hi, how can I correctly extract embeddings (the features right before the classifier) from timm models such as EfficientNet and RegNetY? Thank you in advance.

Replies: 2 comments 7 replies
-
@strugoeli there's actually some relevant discussion in this issue (feature request) #1141 .... but the creator of that issue was looking for both the embeddings and predictions. If you just want the embeddings (the activations right before the classifier) that are shape (batch_size, num_features), the best way is

```python
model = timm.create_model('resnet50', num_classes=0)
```

or

```python
model = timm.create_model('resnet50')
model.reset_classifier(num_classes=0)  # remove classifier (replace with nn.Identity()) after creation
```

Then any `model(input)` will output the embedding.

What you proposed will work for some of the simpler models, but it will fail for many that have more than just a stand-alone global pool between the last feature layer and the classifier (a growing number of models). I am adding additional API:

```python
model = timm.create_model('resnet50')
unpooled_features = model.forward_features(img)
embeddings = model.forward_head(unpooled_features, pre_logits=True)
predictions = model.forward_head(unpooled_features)
```
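To make the `forward_features` / `forward_head` split concrete without needing torch or timm installed, here is a minimal pure-Python sketch of the pattern. `ToyModel` is entirely hypothetical; it only mimics the convention where `forward_head(..., pre_logits=True)` returns the pooled embedding and the plain call returns predictions:

```python
# Hypothetical toy model illustrating the forward_features / forward_head
# convention: features -> (pool) -> embedding -> (classifier) -> predictions.
class ToyModel:
    def __init__(self, num_features=4, num_classes=3):
        self.num_features = num_features
        self.num_classes = num_classes

    def forward_features(self, x):
        # Stand-in "backbone": returns unpooled features
        # (here, rows of feature vectors, each value doubled).
        return [[v * 2.0 for v in row] for row in x]

    def forward_head(self, feats, pre_logits=False):
        # Global-average-pool stand-in: mean over the "spatial" rows.
        pooled = [sum(col) / len(col) for col in zip(*feats)]
        if pre_logits:
            return pooled  # embedding, length num_features
        # Dummy classifier head: one value per class.
        s = sum(pooled)
        return [s] * self.num_classes  # predictions, length num_classes

model = ToyModel()
img = [[1.0, 2.0, 3.0, 4.0], [5.0, 6.0, 7.0, 8.0]]
unpooled = model.forward_features(img)
embedding = model.forward_head(unpooled, pre_logits=True)   # [6.0, 8.0, 10.0, 12.0]
predictions = model.forward_head(unpooled)                  # length 3
```

The design point is that the "head" (pooling plus classifier) is a single unit, so models whose heads contain more than a bare global pool still expose a correct `pre_logits` embedding.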
-
Hey, thanks for the response and thanks for the awesome library as well :)

For example, comparing (ViT, DeiT) to (ResNet, EfficientNet):

```python
vit_model = timm.create_model('vit_large_patch32_384', pretrained=True)
vit_img_features = vit_model.forward_features(img)

deit_model = timm.create_model('deit_base_patch16_384', pretrained=True)
deit_img_features = deit_model.forward_features(img)
```

In ResNet & EfficientNet:

```python
resnet_model = timm.create_model('resnet101', num_classes=0, pretrained=True)
resnet_img_features = resnet_model.forward_features(img)

efficientnet_model = timm.create_model('efficientnet_b4', num_classes=0, pretrained=True)
efficientnet_model_features = efficientnet_model.forward_features(img)
```

I know that the models are different and that can explain the difference in results; however, I am afraid I am just using it wrong. Am I using it correctly? Am I missing something? Thanks
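One source of confusion worth flagging: `forward_features` returns differently shaped outputs per architecture family. ViT/DeiT-style models return a token sequence of shape (batch, num_tokens, embed_dim), while CNNs like ResNet and EfficientNet return a spatial map of shape (batch, channels, height, width). A small sketch computing these shapes from the architecture parameters (the helper functions are illustrative, not part of timm; embed dim and channel counts are the standard values for these architectures):

```python
def vit_feature_shape(img_size, patch_size, embed_dim, batch=1, extra_tokens=1):
    # ViT-style backbone output: patch tokens plus class (and possibly
    # distillation) tokens, each of width embed_dim.
    n_patches = (img_size // patch_size) ** 2
    return (batch, n_patches + extra_tokens, embed_dim)

def cnn_feature_shape(img_size, reduction, channels, batch=1):
    # CNN-style backbone output: a spatial feature map after the final
    # stage (overall stride is typically 32).
    side = img_size // reduction
    return (batch, channels, side, side)

# vit_large_patch32_384: 12x12 patches + 1 class token, embed dim 1024
print(vit_feature_shape(384, 32, 1024))   # (1, 145, 1024)
# resnet101 at 384px input: stride-32 map with 2048 channels
print(cnn_feature_shape(384, 32, 2048))   # (1, 2048, 12, 12)
```

So comparing raw `forward_features` outputs across families compares unlike objects; reducing both to (batch, num_features), e.g. via `forward_head(..., pre_logits=True)`, gives comparable embeddings.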