[Tosa] : Match accumulator type with torch for lowering aten.mm to tosa.matmul #4264


Merged
merged 7 commits into from
Jul 17, 2025

Conversation

sahas3
Member

@sahas3 sahas3 commented Jul 9, 2025

For `i8` input, the accumulator was also being set to `i8`, which produces an invalid `tosa.matmul` op. `fp16` input used an `fp16` accumulator, which, while valid per the TOSA spec, fails numerical verification. This change uses the accumulator type that PyTorch uses (tosa-to-linalg does the same).
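As an aside, the problem the fix addresses can be seen with plain arithmetic. The sketch below is illustrative only (it is not torch-mlir code, and the helper names are hypothetical): accumulating `i8` products in an `i8` register wraps around, while the wider `i32` accumulator that PyTorch uses gives the correct dot product.

```python
# Illustrative sketch of why matmul needs an accumulator wider than its
# i8 inputs. All names here are hypothetical; this is not torch-mlir code.

def wrap_i8(x):
    # Simulate i8 two's-complement wraparound into [-128, 127].
    return (x + 128) % 256 - 128

def dot_i8_acc_i8(a, b):
    # Invalid scheme: accumulate in i8, so partial sums wrap around.
    acc = 0
    for x, y in zip(a, b):
        acc = wrap_i8(acc + wrap_i8(x * y))
    return acc

def dot_i8_acc_i32(a, b):
    # Scheme matching PyTorch (and tosa-to-linalg): a wide accumulator
    # holds every partial sum exactly for i8 inputs.
    return sum(x * y for x, y in zip(a, b))

a = [100, 100]
b = [2, 2]
print(dot_i8_acc_i8(a, b))   # -> -112 (wrapped, wrong)
print(dot_i8_acc_i32(a, b))  # -> 400  (correct dot product)
```

The same reasoning motivates preferring an `fp32` accumulator for `fp16` inputs: each `fp16` partial sum is individually representable, but rounding at every accumulation step degrades the result enough to fail numerical verification.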

@sahas3 sahas3 marked this pull request as ready for review July 9, 2025 20:03
@sahas3 sahas3 requested a review from sjarus July 9, 2025 20:03
@sahas3 sahas3 force-pushed the tosaMatmuli8 branch 2 times, most recently from a5ec231 to 894b05b Compare July 14, 2025 15:49
Collaborator

@sjarus sjarus left a comment


Thanks for this fix!

@sahas3 sahas3 merged commit 0bf6e9c into llvm:main Jul 17, 2025
3 checks passed
Lallapallooza pushed a commit to Lallapallooza/torch-mlir that referenced this pull request Jul 17, 2025
…sa.matmul (llvm#4264)