Using pretrained VIT on MAE #1115
Unanswered
Songloading asked this question in General
@Songloading the official MAE impl is at https://github.com/facebookresearch/mae; lucidrains' impl is intended to work with his own ViT implementation.
Hi there,
I am currently trying to use pretrained ViT models with a Masked Autoencoder (MAE). I was following the MAE implementation in https://github.com/lucidrains/vit-pytorch; however, when I run it with a pretrained ViT, I get an error, which implies the implementations are not compatible. Is there any way I can fix this without changing the internal implementation of the MAE, or is there another MAE implementation in PyTorch that can be used directly with pretrained ViTs? I've asked the same question on their repo, but it seems they had no answer for this. Thanks in advance!