
ViT-Small/16 pre-training (IN-1K or 21K)? #309

aelnouby asked this question in Q&A
Answered by rwightman

@aelnouby The converted weights were pretrained on ImageNet-21k and finetuned on ImageNet-1k by the Google group that wrote the paper and released the official models.

My 'small' model def was a reduced-size model I threw together that was more practical to train on a 2x GPU setup than the official 'base' model. It was trained on ImageNet-1k from scratch with heavier augmentation, but achieved results comparable to the pure ImageNet-1k training results for the official base model reported in the paper.
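For illustration, here is a minimal sketch of loading both variants through timm. The model names follow current timm conventions; which pretrained weights a given name resolves to (ImageNet-21k pretrain with 1k finetune vs. from-scratch ImageNet-1k) can vary by timm version and pretrained tag, so check the model card for the release you are using.

```python
import timm
import torch

# Converted official weights: ImageNet-21k pretrain, ImageNet-1k finetune
# (assuming the standard timm name for the converted Google 'base' weights).
vit_base = timm.create_model('vit_base_patch16_224', pretrained=True)

# The timm 'small' def discussed above: a reduced-size ViT trained from
# scratch on ImageNet-1k with heavier augmentation.
vit_small = timm.create_model('vit_small_patch16_224', pretrained=True)

# Sanity-check the forward pass and the 1000-class ImageNet-1k head.
x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    print(vit_small(x).shape)  # torch.Size([1, 1000])
```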

Answer selected by aelnouby