Replies: 2 comments 3 replies
-
@LuisGF93 can you specify which models? if it's the …
-
@LuisGF93 hmm, I have some onnx export and validation code in another repo; it's likely out of date with current onnx / pytorch, but there may be some differences to look at. In the past it was easy to swap the geffnet factory for the timm one...
In yours you are using 224, which is not the resolution those models are evaluated at, and you are also using bilinear for the resize. The eval numbers are at 288x288 for that model: you should resize to 288 (no tuple, so that it resizes the shortest edge), use bicubic interpolation, and center crop to square. Though that should be a 2-3% difference, not 10-15%.
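The eval-style pipeline described above can be sketched roughly as follows. This is a minimal illustration, not code from either repo: the `preprocess` name is made up, and the ImageNet mean/std constants are an assumption — match whatever normalization your checkpoint's default config actually specifies.

```python
import numpy as np
from PIL import Image

# Assumed normalization constants (standard ImageNet); verify against
# your model's default_cfg, since some ported weights use different stats.
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
IMAGENET_STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(img: Image.Image, size: int = 288) -> np.ndarray:
    """Resize shortest edge to `size` (bicubic), center crop, normalize."""
    # Resize the shortest edge, preserving aspect ratio (what passing a
    # single int rather than a tuple does in the torchvision transforms).
    w, h = img.size
    scale = size / min(w, h)
    img = img.resize((round(w * scale), round(h * scale)), Image.BICUBIC)
    # Center crop to a size x size square.
    w, h = img.size
    left = (w - size) // 2
    top = (h - size) // 2
    img = img.crop((left, top, left + size, top + size))
    # HWC uint8 -> normalized NCHW float32, ready for an ONNX session.
    x = np.asarray(img, dtype=np.float32) / 255.0
    x = (x - IMAGENET_MEAN) / IMAGENET_STD
    return x.transpose(2, 0, 1)[None]
```

If the exported ONNX model is fed tensors built this way while the original eval used a different resize/crop, even small mismatches (224 vs 288, bilinear vs bicubic) shift accuracy by a few points.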
-
Hello,
I've trained an efficientnet v2 model using this repository and then converted it to ONNX format.
Then I ran inference with my .pth checkpoint over a set of images to measure its accuracy, and ran inference over the same set of images with the converted ONNX model. There are no errors in the logs; apparently everything goes fine with both the conversion and the inference, but the accuracy of the converted model is between 10% and 15% worse than the original.
Do you know where the problem could be?