@johnsutor I think we will prioritize getting the key features implemented first, so that users have a good experience publishing .pte models and loading cached ones from the Hub. Connecting the exportable vision models to Optimum will have to wait until after that. However, it would be super nice if you would like to contribute!
To run the .pte model using the ExecuTorch runtime via pybind, you will need to implement a new modeling class, similar to ExecuTorchModelForCausalLM, for vision tasks.
With steps 1 and 2, you will be able to generate the .pte models. Step 3, inference, can be done separately.
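To illustrate what such a modeling class might look like, here is a minimal sketch. The class name `ExecuTorchModelForImageClassification` and its methods are hypothetical, chosen by analogy with ExecuTorchModelForCausalLM; they are not part of the current optimum-executorch API. The only runtime call assumed to exist is ExecuTorch's pybind loader, `_load_for_executorch`, which returns a module whose `forward` takes a list of input tensors.

```python
# Hypothetical vision modeling class, sketched by analogy with
# ExecuTorchModelForCausalLM. Names are illustrative, not the real API.


class ExecuTorchModelForImageClassification:
    """Wraps a compiled .pte program for image-classification inference."""

    def __init__(self, model_path):
        # Lazy import so this sketch can be read (and its interface tested)
        # without executorch installed.
        from executorch.extension.pybindings.portable_lib import (
            _load_for_executorch,
        )

        self._module = _load_for_executorch(model_path)

    @classmethod
    def from_pretrained(cls, model_path):
        # In a real integration this would resolve a Hub repo id to a
        # locally cached .pte file; here it just takes a local path.
        return cls(model_path)

    def forward(self, pixel_values):
        # The ExecuTorch pybind module expects a list of input tensors
        # and returns a list of outputs (here, the classification logits).
        return self._module.forward([pixel_values])
```

Usage would then mirror the causal-LM class: `model = ExecuTorchModelForImageClassification.from_pretrained("model.pte")` followed by `model.forward(pixel_values)` on preprocessed image tensors.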
Following PR #35124, we will add support for vision transformer models that are suitable for on-device deployment.