Some changes in inner-padding option #858
base: main
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/858
Note: Links to docs will display an error until the docs builds have been completed.
❌ 1 New Failure: as of commit 9b6212f with merge base 8aa6533, the following job has failed:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D62003827
Force-pushed from 0fdfe3e to 9481221
Summary:
Pull Request resolved: pytorch#858

Add test cases to verify that the compile of inner-padding works with the triton PR triton-lang/triton#4222. Before the triton PR, the inductor code-gen kernel fails at
```
tmp10 = tl.where(tmp6, tmp8, tmp9)
TypeError: unexpected type fp8e5 and fp8e5
```
Reviewed By: irobert0126
Differential Revision: D62003827
Force-pushed from 9481221 to 9ce26f8
Summary:
Pull Request resolved: pytorch#858

The diff modifies the `padding` option and adds tests with `compile`:

* For a scaled_mm of shape MxKxN, the current `inner_padding` option only pads the `K` dimension. However, if `N` is not divisible by 16, we also get the error
```
E RuntimeError: CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling `cublasLtMatmulAlgoGetHeuristic( ltHandle, computeDesc.descriptor(), Adesc.descriptor(), Bdesc.descriptor(), Cdesc.descriptor(), Ddesc.descriptor(), preference.descriptor(), 1, &heuristicResult, &returnedResult)`
```
So the pad_inner option is modified to also pad the `N` dimension.

* The compile of inner-padding only works with the triton PR triton-lang/triton#4222. Before the triton PR, the inductor code-gen kernel fails at
```
tmp10 = tl.where(tmp6, tmp8, tmp9)
TypeError: unexpected type fp8e5 and fp8e5
```
Reviewed By: irobert0126
Differential Revision: D62003827
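For readers skimming the summary, here is a minimal sketch of the padding idea: round the offending dimensions up to the next multiple of 16 and zero-pad the operands before the scaled matmul. The helper name and the use of `torch.nn.functional.pad` are my own illustration, not the PR's actual implementation.

```python
import torch
import torch.nn.functional as F

def pad_to_multiple_of_16(x: torch.Tensor) -> torch.Tensor:
    """Zero-pad both dims of a 2D operand up to multiples of 16 (illustrative only)."""
    rows, cols = x.shape
    pad_rows = (-rows) % 16
    pad_cols = (-cols) % 16
    # F.pad pads the last dim first: (left, right, top, bottom)
    return F.pad(x, (0, pad_cols, 0, pad_rows))

a = torch.randn(64, 17)  # K = 17 is not a multiple of 16
b = torch.randn(17, 23)  # N = 23 is not a multiple of 16
assert pad_to_multiple_of_16(a).shape == (64, 32)
assert pad_to_multiple_of_16(b).shape == (32, 32)
```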
Force-pushed from 9ce26f8 to 9b6212f
```diff
@@ -494,7 +494,7 @@ def test_different_configs_error(self):
         "base_dtype", [torch.float16, torch.bfloat16, torch.float32]
     )
     @pytest.mark.parametrize("use_fast_accum", [True, False])
-    def test_pad_inner_dim(self, base_dtype, use_fast_accum):
+    def test_pad_dimensions(self, base_dtype, use_fast_accum):
```
I think this would now need 3 cases:
- pad N only
- pad K only
- pad N and K
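One possible way to spell out those three cases (the shapes and the test signature here are illustrative assumptions, not the repository's actual test code): parametrize over shapes where only N, only K, or both need padding.

```python
import pytest

@pytest.mark.parametrize(
    "K, N",
    [
        (64, 17),  # pad N only
        (17, 64),  # pad K only
        (17, 23),  # pad N and K
    ],
)
def test_pad_dimensions(K, N):
    ...  # build a matmul of shape (M, K) x (K, N) and exercise the padded path
```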
```diff
@@ -40,14 +40,25 @@ def _test_compile_base(
     fullgraph: bool,
     config: Float8LinearConfig,
     dtype: torch.dtype,
+    pad_dimensions: bool,
```
same, I think we should test padding N, K, and N and K together
```diff
@@ -120,7 +120,7 @@ class Float8LinearConfig:
     # inner dimension of a (dim 1) and b (dim 2) with 0s. This is needed for matmuls
     # _scaled_mm since it has the strong constraint that for M,N,K N, K must be a multiple of 16.
     # This can cause a memory spike however so we keep this off by default.
-    pad_inner_dim: bool = False
+    pad_dimensions: bool = False
```
nit: update docblock
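For illustration, a possible wording for the updated docblock (a sketch only, not the final text; it assumes the renamed option keeps the same default):

```python
    # If True, pad the matmul operands with zeros so that both the inner
    # dimension (K) and the output dimension (N) become multiples of 16, which
    # _scaled_mm requires for M,N,K. Padding can cause a memory spike, so this
    # stays off by default.
    pad_dimensions: bool = False
```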
```diff
     return a_data, a_scale, b_data, b_scale, out_shape

+def postprocess_addmm(out: torch.Tensor, scaled_mm_config, out_shape):
```
nit: just inline instead of creating a new function? it's only two lines of code and used once
overall looks great! Can we have two more things:
About the test failures:
From the error traces, it keeps calling the functions below repeatedly until the maximum recursion depth is exceeded:
It's expected: the compile of inner-padding only works if the triton package contains this PR, triton-lang/triton#4222.
cc @vkuzo do you have any suggestions? thanks!
can you check if that PR is in triton 3.1.0? https://github.com/pytorch/pytorch/blob/main/.ci/docker/triton_version.txt is the current triton version in OSS PyTorch.
@vkuzo, it's not included in triton 3.1. The PR (#4222) shows up in the diff between 3.1.x and main: triton-lang/triton@release/3.1.x...main
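Given that, one low-tech way to keep the new compile tests from breaking OSS CI until a triton release contains the fix would be a version-gated skip. A rough sketch (the `3.2.0` bound is an assumption, since no released triton version contains triton-lang/triton#4222 yet):

```python
import pytest
import triton
from packaging import version

# Skip compile-of-inner-padding tests on triton builds without the upstream fix.
# Assumes the fix first ships in a release newer than 3.1.x.
requires_patched_triton = pytest.mark.skipif(
    version.parse(triton.__version__) < version.parse("3.2.0"),
    reason="compile of inner-padding needs triton-lang/triton#4222",
)
```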
Yeah I think this is good, just also update the docs with the new behavior
Summary:
The diff modifies the `padding` option and adds tests with `compile`:

* For a scaled_mm of shape MxKxN, the current `inner_padding` option only pads the `K` dimension. However, if `N` is not divisible by 16, we also get the error
```
E RuntimeError: CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling `cublasLtMatmulAlgoGetHeuristic( ltHandle, computeDesc.descriptor(), Adesc.descriptor(), Bdesc.descriptor(), Cdesc.descriptor(), Ddesc.descriptor(), preference.descriptor(), 1, &heuristicResult, &returnedResult)`
```
So the pad_inner option is modified to also pad the `N` dimension.

* The compile of inner-padding only works with the triton PR triton-lang/triton#4222. Before the triton PR, the inductor code-gen kernel fails at
```
tmp10 = tl.where(tmp6, tmp8, tmp9)
TypeError: unexpected type fp8e5 and fp8e5
```
Reviewed By: irobert0126
Differential Revision: D62003827
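For completeness, a hedged end-to-end sketch of how the renamed option would be exercised (the torchao names are taken from the diff and from the public float8 API as I understand it, so they may not match the released package exactly):

```python
import torch
import torch.nn as nn
from torchao.float8 import Float8LinearConfig, convert_to_float8_training

# pad_dimensions is the option renamed from pad_inner_dim in this PR; with it
# enabled, both K and N are zero-padded to multiples of 16 before _scaled_mm.
config = Float8LinearConfig(pad_dimensions=True)

# Feature sizes deliberately not multiples of 16 to hit the padding path.
model = nn.Sequential(nn.Linear(17, 23, bias=False)).cuda().to(torch.bfloat16)
model = convert_to_float8_training(model, config=config)
model = torch.compile(model)

out = model(torch.randn(64, 17, device="cuda", dtype=torch.bfloat16))
```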