
[float8] add auto_filter_for_recipe to float8 #2410

Open
wants to merge 2 commits into main

Conversation

danielvegamyhre (Contributor) commented Jun 18, 2025

Part of pytorch/torchtitan#1207

Problem

  • float8 rowwise + vanilla TP in torchtitan showed flat perf relative to bfloat16.
  • Root-cause analysis in pytorch/torchtitan#1207 found the attention.wk and attention.wv layers were so small that float8 rowwise conversion caused a significant slowdown (approx. 40%) for those linears, nullifying the perf gains from fp8 rowwise conversion on the larger linears.
  • This is because the default filter_fqns for float8 model conversion work well for the fp8 tensorwise recipe but poorly for the float8 rowwise recipe.

Solution

This has been a footgun for various users as well (including Poolside), so I created an "auto filter" (#2410) that automatically filters out Linears for a given float8 recipe by checking the following criteria (see the sketch after this list):

  1. dims not divisible by 16 (hardware requirement for float8)
  2. dim sizes below thresholds that result in worse perf for the given recipe, using simple heuristics based on the linked recipe perf tables above
  3. fqn matches one of the user-defined filter_fqns
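
For illustration, here is a minimal sketch of how such a filter could be structured. The `_MIN_DIM_FOR_RECIPE` table and its threshold values are hypothetical placeholders, not the heuristics actually used in this PR; the returned callable follows the `(module, fqn) -> bool` filter convention used by torchao's float8 conversion:

```python
import torch.nn as nn

# Hypothetical per-recipe minimum dims below which fp8 conversion is
# assumed to hurt perf; the real PR derives these from perf tables.
_MIN_DIM_FOR_RECIPE = {
    "tensorwise": 0,
    "rowwise": 2048,
}

def auto_filter_for_recipe(recipe: str, filter_fqns: list[str]):
    """Return a filter_fn(mod, fqn) -> bool deciding whether to convert a layer."""
    min_dim = _MIN_DIM_FOR_RECIPE[recipe]

    def filter_fn(mod: nn.Module, fqn: str) -> bool:
        if not isinstance(mod, nn.Linear):
            return False
        # 1. dims must be divisible by 16 (float8 hardware requirement)
        if mod.in_features % 16 != 0 or mod.out_features % 16 != 0:
            return False
        # 2. skip linears too small to benefit from this recipe
        if mod.in_features < min_dim or mod.out_features < min_dim:
            return False
        # 3. respect user-defined fqn filters
        if any(f in fqn for f in filter_fqns):
            return False
        return True

    return filter_fn
```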

I integrated a PoC into torchtitan, and the auto filter improved fp8 rowwise perf in both a local Llama3 8b run and a Llama3 70b MAST run, compared to the default filter_fn we have now.

It prevents users from hitting this common footgun, while preserving the flexibility to define model-specific filter_fqns. A usage sketch follows.
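
As a hedged usage sketch: this assumes torchao.float8's `convert_to_float8_training` and `Float8LinearConfig.from_recipe_name` API plus the `auto_filter_for_recipe` sketch above; the model shapes and `filter_fqns` values are placeholders.

```python
import torch.nn as nn
from torchao.float8 import Float8LinearConfig, convert_to_float8_training

model = nn.Sequential(
    nn.Linear(4096, 4096),  # large enough to benefit: converted
    nn.Linear(4096, 256),   # below the hypothetical rowwise threshold: skipped
)

config = Float8LinearConfig.from_recipe_name("rowwise")
filter_fn = auto_filter_for_recipe("rowwise", filter_fqns=["some.custom.layer"])
convert_to_float8_training(model, config=config, module_filter_fn=filter_fn)
```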

Results

See pytorch/torchtitan#1207 for Llama3 70b results. TL;DR: filtering wk and wv improves TPS by ~10% for vanilla TP and ~15% for async TP.
