
Some changes in inner-padding option #858


Open · wants to merge 1 commit into main

Conversation

@y-sq (Contributor) commented Sep 9, 2024

Summary:
This diff modifies the padding option and adds tests with compile:

  • For a scaled_mm of shape MxKxN, the current inner_padding option only pads the K dimension. However, if N is not divisible by 16, we also get the error
E       RuntimeError: CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling `cublasLtMatmulAlgoGetHeuristic( ltHandle, computeDesc.descriptor(), Adesc.descriptor(), Bdesc.descriptor(), Cdesc.descriptor(), Ddesc.descriptor(), preference.descriptor(), 1, &heuristicResult, &returnedResult)`

So the pad_inner option is modified to also pad the N dimension.
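
For illustration, a minimal sketch of the zero-padding idea (the `pad_to_multiple_of_16` helper is hypothetical, not the PR's actual implementation):

```python
import torch

def pad_to_multiple_of_16(t: torch.Tensor, dim: int) -> torch.Tensor:
    # Zero-pad `t` along `dim` so its size becomes a multiple of 16.
    pad_amount = (-t.shape[dim]) % 16
    if pad_amount == 0:
        return t
    pad_shape = list(t.shape)
    pad_shape[dim] = pad_amount
    return torch.cat([t, t.new_zeros(pad_shape)], dim=dim)

# For an (M, K) @ (K, N) matmul routed to torch._scaled_mm, K and N must be
# multiples of 16, so pad `a` along K and `b` along both K and N.
a = torch.randn(128, 100)  # K = 100 is not a multiple of 16
b = torch.randn(100, 50)   # N = 50 is not a multiple of 16
a_padded = pad_to_multiple_of_16(a, dim=1)
b_padded = pad_to_multiple_of_16(pad_to_multiple_of_16(b, dim=0), dim=1)
assert a_padded.shape == (128, 112) and b_padded.shape == (112, 64)
```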


The compile of inner-padding only works with the triton PR triton-lang/triton#4222. Before that PR, the inductor code-gen kernel fails at

tmp10 = tl.where(tmp6, tmp8, tmp9)

TypeError: unexpected type fp8e5 and fp8e5

Reviewed By: irobert0126

Differential Revision: D62003827

@pytorch-bot (bot) commented Sep 9, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/858

Note: Links to docs will display an error until the docs builds have been completed.

❌ 1 New Failure

As of commit 9b6212f with merge base 8aa6533:

NEW FAILURE - The following job has failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot added the CLA Signed label Sep 9, 2024
@facebook-github-bot commented

This pull request was exported from Phabricator. Differential Revision: D62003827


y-sq added a commit to y-sq/ao that referenced this pull request Sep 10, 2024
Summary:
Pull Request resolved: pytorch#858

Add test cases to verify that the compile of inner-padding works with the triton PR triton-lang/triton#4222.

Before the triton PR, the inductor code-gen kernel fails at
```
tmp10 = tl.where(tmp6, tmp8, tmp9)

TypeError: unexpected type fp8e5 and fp8e5
```

Reviewed By: irobert0126

Differential Revision: D62003827

y-sq added a commit to y-sq/ao that referenced this pull request Sep 10, 2024
Summary:
Pull Request resolved: pytorch#858

This diff modifies the `padding` option and adds tests with `compile`:

* For a scaled_mm of shape MxKxN, the current `inner_padding` option only pads the `K` dimension. However, if `N` is not divisible by 16, we also get the error
```
E       RuntimeError: CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling `cublasLtMatmulAlgoGetHeuristic( ltHandle, computeDesc.descriptor(), Adesc.descriptor(), Bdesc.descriptor(), Cdesc.descriptor(), Ddesc.descriptor(), preference.descriptor(), 1, &heuristicResult, &returnedResult)`
```
So the pad_inner option is modified to also pad the N dimension.

-----
* The compile of inner-padding only works with the triton PR triton-lang/triton#4222.

Before the triton PR, the inductor code-gen kernel fails at
```
tmp10 = tl.where(tmp6, tmp8, tmp9)

TypeError: unexpected type fp8e5 and fp8e5
```

Reviewed By: irobert0126

Differential Revision: D62003827

@y-sq changed the title from "Test compile with inner-padding" to "Some changes in inner-padding option" Sep 16, 2024
@@ -494,7 +494,7 @@ def test_different_configs_error(self):
     "base_dtype", [torch.float16, torch.bfloat16, torch.float32]
 )
 @pytest.mark.parametrize("use_fast_accum", [True, False])
-def test_pad_inner_dim(self, base_dtype, use_fast_accum):
+def test_pad_dimensions(self, base_dtype, use_fast_accum):

I think this would now need 3 cases:

  • pad N only
  • pad K only
  • pad N and K
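
A minimal sketch of how such a parametrization could look (the `pad_n`/`pad_k` flag names and the test body are hypothetical, not the PR's actual code):

```python
import pytest

@pytest.mark.parametrize(
    "pad_n, pad_k",
    [
        (True, False),   # pad N only
        (False, True),   # pad K only
        (True, True),    # pad N and K
    ],
)
def test_pad_dimensions(pad_n, pad_k):
    # Choose N and/or K that are not multiples of 16 according to the flags,
    # run the float8 scaled matmul with pad_dimensions=True, and compare the
    # result against an unpadded high-precision reference.
    ...
```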

@@ -40,14 +40,25 @@ def _test_compile_base(
     fullgraph: bool,
     config: Float8LinearConfig,
     dtype: torch.dtype,
+    pad_dimensions: bool,

same, I think we should test padding N, K, and N and K together

@@ -120,7 +120,7 @@ class Float8LinearConfig:
     # inner dimension of a (dim 1) and b (dim 2) with 0s. This is needed for matmuls
     # _scaled_mm since it has the strong constraint that for M,N,K N, K must be a multiple of 16.
     # This can cause a memory spike however so we keep this off by default.
-    pad_inner_dim: bool = False
+    pad_dimensions: bool = False

nit: update docblock
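
One possible wording for the updated docblock (just a sketch, assuming the option now pads both K and N; not the final text):

```python
    # If True, zero-pad the inner dimension (K) of `a` and `b` and the N dimension of `b`
    # so that _scaled_mm's requirement that N and K be multiples of 16 is satisfied.
    # Padding allocates extra memory and can cause a memory spike, so this is off by default.
    pad_dimensions: bool = False
```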


return a_data, a_scale, b_data, b_scale, out_shape

def postprocess_addmm(out: torch.Tensor, scaled_mm_config, out_shape):

nit: just inline instead of creating a new function? it's only two lines of code and used once

@vkuzo (Contributor) commented Sep 16, 2024

overall looks great! Can we have two more things:

  1. update https://github.com/pytorch/ao/blob/main/benchmarks/float8/bench_padding.py and share data on how this PR impacts performance
  2. mark this PR as BC-breaking and add the before and after state to the PR summary

@y-sq (Contributor, Author) commented Sep 16, 2024

About the test failures:

  1. Some aot_eager tests failed due to:
  E       RecursionError: maximum recursion depth exceeded while calling a Python object
  
  /opt/conda/envs/venv/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py:309: RecursionError
  _ test_aot_eager[dtype2-True-ScalingType.DYNAMIC-ScalingType.DELAYED-ScalingType.STATIC-True-True] _

From the error traces, it keeps calling the functions below repeatedly until the maximum recursion depth is exceeded:

/opt/conda/envs/venv/lib/python3.9/site-packages/torch/_dynamo/backends/common.py:51: in _wrapped_bw_compiler
      return disable(disable(bw_compiler)(*args, **kwargs))
  /opt/conda/envs/venv/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py:632: in _fn
      return fn(*args, **kwargs)
  /opt/conda/envs/venv/lib/python3.9/site-packages/torch/_dynamo/backends/common.py:51: in _wrapped_bw_compiler
      return disable(disable(bw_compiler)(*args, **kwargs))
  /opt/conda/envs/venv/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py:632: in _fn
      return fn(*args, **kwargs)
  /opt/conda/envs/venv/lib/python3.9/site-packages/torch/_dynamo/backends/common.py:51: in _wrapped_bw_compiler
      return disable(disable(bw_compiler)(*args, **kwargs))
  /opt/conda/envs/venv/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py:632: in _fn
  • I'm still debugging this error.
  2. test_inductor tests failed at
tmp10 = tl.where(tmp6, tmp8, tmp9)

TypeError: unexpected type fp8e5 and fp8e5

It's expected - the compile of inner-padding only works if the triton package contains this PR triton-lang/triton#4222.

  • How should we add such tests in GitHub CI?

cc @vkuzo do you have any suggestions? thanks!

@vkuzo (Contributor) commented Sep 16, 2024

> It's expected - the compile of inner-padding only works if the triton package contains this PR triton-lang/triton#4222

can you check if that PR is in triton 3.1.0? https://github.com/pytorch/pytorch/blob/main/.ci/docker/triton_version.txt is the current triton version in OSS PyTorch.
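
For reference, a quick way to check which Triton version is installed locally (a simple snippet, not part of the PR):

```python
import triton

# Print the installed Triton version; per the discussion below, the fix from
# triton-lang/triton#4222 is not in the 3.1.x release branch.
print(triton.__version__)
```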

@y-sq (Contributor, Author) commented Sep 16, 2024

@vkuzo, it's not included in triton 3.1. The PR (#4222) shows up in the diff between 3.1.x and main: triton-lang/triton@release/3.1.x...main

@y-sq requested a review from drisspg October 2, 2024 05:59
@drisspg (Contributor) left a comment


Yeah I think this is good, just also update the docs with the new behavior

yanbing-j pushed a commit to yanbing-j/ao that referenced this pull request Dec 9, 2024
Follow-up after pytorch/torchchat#815 to unblock migration to a newer version of PyTorch where AOTI seems to have lost the ability to error out when one attempts to load a CPU model on GPU; see https://github.com/pytorch/torchchat/actions/runs/9391753397/job/25913830802 for an example.

Workaround: add the `-d ${DEVICE}` option to `aoti_runner`.
Labels: CLA Signed, fb-exported