Arm backend: Propagate node info from quantizer to backend #15300
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/15300
Note: Links to docs will display an error until the docs builds have been completed.
❌ 3 New Failures, 6 Unrelated Failures as of commit b3c3da5 with merge base 747fc6f.
BROKEN TRUNK: some failing jobs were already failing on the merge base; 👉 rebase onto the `viable/strict` branch to avoid them.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
node.meta[Q_ANNOTATION_KEY]._annotated = True
meta_custom = node.meta.get("custom", {})
meta_custom[ArmAnnotationInfo.CUSTOM_META_KEY] = annotation_info
node.meta["custom"] = meta_custom
@digantdesai What are your thoughts on this conceptually?
@digantdesai I have a feeling the MCU backend will need something similar to this since mixed int/fp graphs will likely be more common there.
Adding @SS-JIA as @digantdesai is out of office.
Force-pushed from c9f432e to ef42e7f (Compare)
Apologies for the delay, I will take a look at this PR tomorrow.
A number of ops only handle shape/metadata without changing the dynamic range. In these cases, no rescaling needs to be performed and the int8 portable_ops kernel can be used directly. A new test is added to ensure this behaviour, as well as a test showing that operators which do change the dynamic range (SUB) are not supported. To support quantization of graphs with no-rescale ops at the beginning/end of the graph, two new quantizers, InputQuantizer and OutputQuantizer, are introduced. By explicitly stating the dtype of the input/output, no-rescale ops inherit dtypes from them as with any other op. This change exposes the issue of mixing dtypes within the graph, which adds back xfails for the broadcasted add and mul tests. This can be fixed in a future patch after pytorch#15300 is resolved.

Signed-off-by: Adrian Lundell <[email protected]>
Change-Id: I8f79b86b633f9ad8d9f183c914754b0ee2f7a87c
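To illustrate the idea, here is a minimal NumPy sketch (not the actual Arm backend or ExecuTorch code; the helpers and parameter values are assumptions for demonstration): a shape-only op like reshape leaves every value untouched, so the input's quantization parameters remain valid for the output and no rescale is needed, whereas an op like sub produces a new dynamic range.

```python
import numpy as np

# Shared int8 quantization parameters (illustrative values).
scale, zero_point = 0.05, 0

def quantize(x):
    return np.clip(np.round(x / scale) + zero_point, -128, 127).astype(np.int8)

def dequantize(q):
    return (q.astype(np.float32) - zero_point) * scale

x = np.random.uniform(-5.0, 5.0, size=(2, 8)).astype(np.float32)
q = quantize(x)

# reshape only touches metadata: dequantizing with the *input's* parameters
# still recovers the values, so no rescale pass is needed around the op.
out = dequantize(q.reshape(4, 4))
assert np.allclose(out, x.reshape(4, 4), atol=scale)

# sub, by contrast, changes the dynamic range: x - y can span roughly
# [-10, 10], which the input parameters (covering about [-6.4, 6.35])
# cannot represent without clipping, so its output would need its own
# quantization parameters.
```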
SS-JIA left a comment:
LGTM!
Use the Node meta 'custom' field to propagate information from the quantizer to the partitioner using a new ArmAnnotationInfo data class. This allows us to reliably track quantized nodes, which is useful for determining which nodes should 'fold' their quantization parameters and which should be kept in fp when mixing integer and float in a sub-graph.

Co-authored-by: Per Åstrand <[email protected]>
Signed-off-by: Oscar Andersson <[email protected]>
Change-Id: I31309d65cac50e497318eae8678880684ec77cda
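A hedged sketch of the mechanism described above, matching the diff snippet earlier in the thread. Only `node.meta["custom"]` and `ArmAnnotationInfo.CUSTOM_META_KEY` appear in the actual change; the dataclass field, key value, and helper name below are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ArmAnnotationInfo:
    # Hypothetical payload; the real class may carry different fields.
    quantized: bool

    # Key under which the info is stored in node.meta["custom"] (assumed name).
    CUSTOM_META_KEY = "arm_annotation_info"

def annotate_node(node) -> None:
    # The 'custom' dict in Node.meta is preserved from the quantizer through
    # to the partitioner, so stashing the annotation here lets the backend
    # later decide which nodes fold their q-params and which stay in fp.
    annotation_info = ArmAnnotationInfo(quantized=True)
    meta_custom = node.meta.get("custom", {})
    meta_custom[ArmAnnotationInfo.CUSTOM_META_KEY] = annotation_info
    node.meta["custom"] = meta_custom
```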
Force-pushed from ef42e7f to b3c3da5 (Compare)
cc @freddan80 @per @zingo @digantdesai