
mxfp8 training: add TP sharding strategy for dim1 kernel #2436


Open: vkuzo wants to merge 24 commits into main

Conversation

vkuzo (Contributor) commented on Jun 24, 2025:

Summary:

Enables mxfp8 training with the dim1 Triton kernel and tensor parallelism (TP), in eager mode. In detail:

  1. write a sharding strategy for the Triton kernel custom op (see the first sketch below)
  2. properly wrap the outputs of the custom op in DTensor (see the second sketch below)
  3. add a test that enables the Triton custom op, and adjust the test model sizes, since this kernel requires block_size 32 (see the config sketch after the Test Plan)
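
For item 1, a minimal sketch of registering a sharding strategy for a custom op via DTensor's experimental `register_sharding` API. The op name, signature, and placements below are illustrative assumptions, not the exact code in this PR:

```python
import torch
from torch.distributed.tensor import Replicate, Shard
from torch.distributed.tensor.experimental import register_sharding

# Hypothetical op name standing in for the dim1 cast Triton kernel custom op.
# The strategy function returns (output_placements, input_placements) pairs
# that DTensor may choose from at dispatch time; two output placements are
# listed under the assumption that the op returns fp8 data plus scales.
@register_sharding(torch.ops.torchao.mx_dim1_cast.default)
def mx_dim1_cast_sharding(x, block_size):
    return [
        # fully replicated always works; None marks the non-tensor arg
        ([Replicate(), Replicate()], [Replicate(), None]),
        # sharding the dim the kernel does not scale over (assumed here to
        # be dim 1) keeps each block_size group of elements on one rank
        ([Shard(1), Shard(1)], [Shard(1), None]),
    ]
```

For item 2, a sketch of wrapping the plain-tensor outputs of the local kernel back into DTensor, so downstream ops keep seeing the input's mesh and placements (same hypothetical op name):

```python
import torch
from torch.distributed.tensor import DTensor

def mx_dim1_cast_dtensor(x: DTensor, block_size: int):
    # run the local Triton kernel on this rank's shard
    data_local, scale_local = torch.ops.torchao.mx_dim1_cast.default(
        x.to_local(), block_size
    )
    # re-wrap the raw outputs with the input's mesh and placements
    data = DTensor.from_local(data_local, x.device_mesh, x.placements)
    scale = DTensor.from_local(scale_local, x.device_mesh, x.placements)
    return data, scale
```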

Note that torch.compile does not work yet; see https://www.internalfb.com/phabricator/paste/view/P1851219639

Test Plan:

// TP test passes (with eager mode)
./test/prototype/mx_formats/test_mx_dtensor.sh

// unit tests still pass
pytest test/prototype/mx_formats

// torchtitan debug model loss goes down (in eager)
with-proxy CONFIG_FILE="torchtitan/models/llama3/train_configs/debug_model.toml" ./run_train.sh --model.print_after_conversion --training.steps 50 --model.converters mx --mx.recipe_name "mxfp8" --parallelism.tensor_parallel_degree=2 --mx.use_fp8_dim1_cast_triton_kernel
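
For context on item 3 of the summary, a rough sketch of the kind of config the new test presumably builds. `MXLinearConfig` and `block_size` are real torchao prototype names, but the kernel-selection field is an assumption mirroring the `--mx.use_fp8_dim1_cast_triton_kernel` flag above:

```python
from torchao.prototype.mx_formats.config import MXLinearConfig

# the dim1 Triton kernel quantizes elements in groups of 32, so the test
# model dimensions must be multiples of block_size = 32
config = MXLinearConfig.from_recipe_name("mxfp8_emulated")
config.block_size = 32
config.use_fp8_dim1_cast_triton_kernel = True  # assumed field name
```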


vkuzo added 22 commits on June 20, 2025, all titled [ghstack-poisoned].
vkuzo (Contributor, Author) commented on Jun 24, 2025:

Stack from ghstack (oldest at bottom):

pytorch-bot commented on Jun 24, 2025:

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/2436

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit dc0803c with merge base 32599be:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

facebook-github-bot added the CLA Signed label on Jun 24, 2025.
vkuzo added a commit that referenced this pull request on Jun 24, 2025 (ghstack-source-id: 3b7f85d, ghstack-comment-id: 3001601112, Pull Request resolved: #2436).
vkuzo changed the base branch from gh/vkuzo/91/head to main on June 24, 2025.
vkuzo added a commit that referenced this pull request on Jun 24, 2025 (ghstack-source-id: ddb1f80, ghstack-comment-id: 3001601112, Pull Request resolved: #2436).
vkuzo added a commit that referenced this pull request on Jun 25, 2025 (ghstack-source-id: b229983, ghstack-comment-id: 3001601112, Pull Request resolved: #2436).
vkuzo changed the title from "[wip] sharding strategy for dim1 kernel" to "mxfp8 training: add TP sharding strategy for dim1 kernel" on Jun 25, 2025.
vkuzo (Contributor, Author) commented on this diff excerpt:

    )
    # TODO(future PR): enable compile here, currently seeing
    # https://www.internalfb.com/phabricator/paste/view/P1851219639
    # _test_lowp_mlp_tensor_parallelism_base(
need to uncomment this and run ./test/prototype/mx_formats/test_mx_dtensor.sh to reproduce
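
A sketch of what that reproduction looks like, i.e. the call above with the comment markers removed; the argument list is an assumption mirroring the eager-mode call that precedes it in the test file:

```python
# hypothetical arguments; compile=True is what triggers the failure
_test_lowp_mlp_tensor_parallelism_base(
    mesh, config, size, compile=True
)
```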

vkuzo added the topic: improvement label on Jun 25, 2025.
vkuzo requested review from drisspg and danielvegamyhre on June 27, 2025.