Arm backend: Support channels-last input and output #14259
Conversation
Summary: Pull Request resolved: pytorch#14191. Redoing pytorch#14111 with additional fixes.
Reviewed By: digantdesai
Differential Revision: D82171193
- Insert transposes for input/output iff the incoming/outgoing data is in channels-first format.
- For testing using tosa_reference_mode, transpose numpy arrays to and from the correct data format, since numpy doesn't have the concept of dim_order.
- Remove checks for channels_first-only input.
- Remove the check for not changing dim_order before the to_tosa_memory_format pass, since the behaviour of channels-last tensors is not predictable.
- Add dim-order testing of example networks and mv2.
- Add a section to the documentation about memory formats.

Signed-off-by: Adrian Lundell <[email protected]>
Change-Id: I05548b9f3b4671da6faad90a9dd7366fda4498d6
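To make the memory-format handling above concrete, here is a minimal, illustrative sketch (the module, shapes, and variable names are made up for this example, not taken from the PR) of preparing a channels-last example input for export, and of the kind of explicit numpy axis permutation the reference-model tests need, since numpy has no dim_order concept:

```python
import torch

# Hypothetical toy module, only to have a 4D activation to export.
class SmallConv(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 8, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(x)

model = SmallConv().eval()

# A channels-last input tensor: the logical shape stays NCHW, but the
# underlying buffer is laid out NHWC, i.e. dim_order (0, 2, 3, 1).
example_inputs = (torch.randn(1, 3, 32, 32).to(memory_format=torch.channels_last),)
exported_program = torch.export.export(model, example_inputs)

# numpy has no dim_order, so a reference run on raw arrays needs an explicit
# permutation into the NHWC layout (and the inverse permutation on the way back).
nhwc_array = example_inputs[0].numpy().transpose(0, 2, 3, 1)
nchw_array = nhwc_array.transpose(0, 3, 1, 2)
```

With this change, the AoT flow is only expected to insert the input/output transposes when the incoming or outgoing data is channels-first, matching the first bullet above.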
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/14259
Note: Links to docs will display an error until the docs builds have been completed.
❌ 3 New Failures, 1 Unrelated Failure as of commit adec5aa with merge base cf6e895.
NEW FAILURES - The following jobs have failed.
BROKEN TRUNK - The following job failed but was also present on the merge base. 👉 Rebase onto the `viable/strict` branch to avoid these failures.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@mergennachin has imported this pull request. If you are a Meta employee, you can view this in D82449155.
Review automatically exported from Phabricator review in Meta.
@@ -249,15 +249,6 @@ class EthosUBackend final : public ::executorch::runtime::BackendInterface {
            handles.inputs->io[i].elem_size);
        return Error::InvalidProgram;
      }
      supported = executorch::runtime::is_contiguous_dim_order(
This implies it can handle anything, i.e. a transpose op is inserted if it is needed. But what about asserting expectations? I.e. if the user exported with NCHW and we inserted a transpose_to_nhwc AoT, what if the user now supplies NHWC (instead of the assumed NCHW)? Shouldn't we validate, since we don't "check and optionally transpose" at runtime?
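To make the concern concrete, here is a small illustrative PyTorch sketch (not the backend's runtime code) of how a buffer supplied in the wrong layout is silently misread when nothing validates the dim order at runtime:

```python
import torch

x = torch.arange(24, dtype=torch.float32).reshape(1, 2, 3, 4)  # logical NCHW, contiguous
x_cl = x.to(memory_format=torch.channels_last)                  # same values, NHWC buffer

print(torch.equal(x, x_cl))       # True: logically identical tensors
print(x.stride(), x_cl.stride())  # (24, 12, 4, 1) vs (24, 1, 8, 2): different buffer layouts

# If the runtime assumes the contiguous (NCHW) buffer order but is handed the
# channels-last buffer, it effectively reads the storage with the wrong strides:
misread = torch.as_strided(x_cl, x.size(), x.stride())
print(torch.equal(x, misread))    # False: the data is silently scrambled
```

That is the failure mode a runtime-side check such as the removed is_contiguous_dim_order call (or an equivalent assertion on the expected dim order) would catch.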
Good point and I agree. However, since we are past the branch cutoff date and we need this patch to unblock a major use case for us, may I ask that we ignore this for now and fix it in a later PR?
No worries. I assumed that and stamped already :)
Does the stamp mean we can merge, or do we still wait on Meta to merge?
Haha, good. I was already in the non-GA mode where stamp --> you merge --> internal failure --> we revert.
But looking at the activity from @mergennachin, he is, rightfully, still in the GA mental mode for this GA-critical PR. So if the internal CI is clean, he or I can merge this.
Thanks, I'm happy and you handled it fast, so no problem. I just want to avoid a "no one does it" situation 😆 as there might be a merge/sync to 1.0 coming up.
And I also don't expect any PR not tagged with the 1.0 milestone to be merged. If it is, that's just a bonus.
Signed-off-by: Adrian Lundell <[email protected]>
The failing Arm backend test is a very small numerical diff; the other failures are not Arm-specific.
@mergennachin has imported this pull request. If you are a Meta employee, you can view this in D82449155.
The new tests are failing. Here's an example error message:
@mergennachin The only thing I can come up with that would cause this is that you need to update the way you run the tests internally to match the changes made in this PR. Would it be OK to disable the tests for now and put a ticket on you to fix this, or do you have another solution you prefer?
Yeah, likely not a problem with this PR. Let me take a look.
cc @digantdesai @freddan80 @per @zingo @oscarandersson8218