Which are the ViTs that were trained only on ImageNet-1k? Of these, `vit_base_patch16_224_miil` is just 1k, right (though it's not from Google)? And `vit_base_patch16_224`, as above, is actually 21k fine-tuned on 1k?
@sayakpaul
Models that:

- start with `vit_` and end with `_in21k` were trained on ImageNet-21k and not fine-tuned. Their classification heads were zero'd by the Google researchers, so they don't work for 21k classification but can be fine-tuned for other tasks (they have weights for the pre-logits that other models don't). They are always 224x224.
- start with `vit_` and have `jx_` at the beginning of their weights were also trained by Google; these are the ones that were pretrained on ImageNet-21k and fine-tuned on ImageNet-1k. They are 224 or 384.
- start with `deit_` are the FB-trained models, trained on ImageNet-1k with and without distillation (per the model name).
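
If you want to sanity-check these conventions against your installed version, here's a rough sketch. It assumes a timm 0.x release where flat names like `vit_base_patch16_224_in21k` exist; later releases moved to tagged names (e.g. `vit_base_patch16_224.augreg_in21k`), so adjust the patterns to whatever `timm.list_models()` reports:

```python
# Rough sketch, not authoritative: enumerate ViT/DeiT weights by the
# naming conventions described above, assuming a timm 0.x install.
import timm

# ImageNet-21k-only pretrained ViTs (zero'd heads, meant for fine-tuning):
print(timm.list_models('vit_*in21k', pretrained=True))

# DeiT models, all ImageNet-1k ('distilled' appears in the name when used):
print(timm.list_models('deit_*', pretrained=True))

# The checkpoint URL shows the 'jx_' prefix on Google's 21k->1k weights:
print(timm.create_model('vit_base_patch16_224').default_cfg['url'])

# Fine-tune a 21k checkpoint on a new task: num_classes replaces the
# zero'd head with a freshly initialized one of the requested size.
model = timm.create_model('vit_base_patch16_224_in21k', pretrained=True,
                          num_classes=10)
```

`list_models` accepts fnmatch-style wildcards, so the same pattern trick works for any of the prefixes/suffixes above.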