Add round_scales_to_power_of_2 option for float quantization #2323


Open · wants to merge 1 commit into main from drisspg/stack/67

Conversation

@drisspg drisspg (Contributor) commented Jun 6, 2025

Stacked PRs:


Add round_scales_to_power_of_2 option for float quantization

This adds support for rounding scaling factors down to the nearest power of 2
for float quantization, following the pattern established in Float8LinearConfig.

Key changes:

  • Add round_scales_to_power_of_2 parameter to all float quantization configs
  • Update choose_qparams_affine_floatx and to_scaled_tc_floatx functions to apply power of 2 rounding
  • Thread the parameter through all relevant function calls in quant_api.py

This lets users who train with this setting use the same scale rounding at inference time.
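
For reference, rounding a scale down to the nearest power of 2 amounts to keeping only its base-2 exponent. A minimal sketch in PyTorch (the helper name is illustrative, not the exact function touched by this PR):

```python
import torch

def round_scale_down_to_power_of_2(scale: torch.Tensor) -> torch.Tensor:
    # Keep only the exponent of each scale; multiplying or dividing by the
    # result is then exact in floating point (no mantissa rounding error).
    return torch.exp2(torch.floor(torch.log2(scale)))

scales = torch.tensor([0.75, 3.2, 100.0])
print(round_scale_down_to_power_of_2(scales))  # tensor([ 0.5000,  2.0000, 64.0000])
```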

pytorch-bot bot commented Jun 6, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/2323

Note: Links to docs will display an error until the docs builds have been completed.

❌ 1 New Failure

As of commit 4d77255 with merge base 282d04f:

NEW FAILURE - The following job has failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

drisspg added a commit that referenced this pull request Jun 6, 2025
This adds support for rounding scaling factors down to the nearest power of 2
for float quantization, following the pattern established in Float8LinearConfig.

Key changes:
- Add round_scales_to_power_of_2 parameter to all float quantization configs
- Update choose_qparams_affine_floatx and to_scaled_tc_floatx functions to apply power of 2 rounding
- Thread the parameter through all relevant function calls in quant_api.py
- Maintain backward compatibility with default value of False

This helps reduce quantization error by avoiding rounding errors when
multiplying/dividing by scaling factors and ensures consistent quantization
between forward and backward passes.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <[email protected]>

stack-info: PR: #2323, branch: drisspg/stack/67
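
Once this lands, enabling the option through the quantization API might look like the sketch below; quantize_ and Float8DynamicActivationFloat8WeightConfig are existing torchao entry points, and the new keyword is the one added in this PR (defaulting to False):

```python
import torch
from torchao.quantization import quantize_, Float8DynamicActivationFloat8WeightConfig

# Toy model; float8 dynamic quantization targets the Linear layers.
model = torch.nn.Sequential(torch.nn.Linear(128, 256)).to(torch.bfloat16).cuda()

# Hedged usage sketch: round weight/activation scales down to powers of 2 at
# inference, matching a model trained with
# Float8LinearConfig(round_scales_to_power_of_2=True).
quantize_(model, Float8DynamicActivationFloat8WeightConfig(round_scales_to_power_of_2=True))
```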
@drisspg drisspg force-pushed the drisspg/stack/67 branch from a55a119 to 988714b Compare June 6, 2025 01:00
@facebook-github-bot facebook-github-bot added the CLA Signed label Jun 6, 2025
@drisspg drisspg added the float8 label Jun 6, 2025
drisspg added a commit that referenced this pull request Jun 6, 2025
@drisspg drisspg force-pushed the drisspg/stack/67 branch from 988714b to 986924c Compare June 6, 2025 01:04
@drisspg drisspg added the topic: new feature label Jun 6, 2025
drisspg added a commit that referenced this pull request Jun 6, 2025
@drisspg drisspg force-pushed the drisspg/stack/67 branch 2 times, most recently from 80c391c to 2902ef7 Compare June 6, 2025 17:48
drisspg added a commit that referenced this pull request Jun 6, 2025
drisspg added a commit that referenced this pull request Jun 7, 2025
@drisspg drisspg force-pushed the drisspg/stack/67 branch from 2902ef7 to 26400e5 Compare June 7, 2025 22:30
drisspg added a commit that referenced this pull request Jun 7, 2025
@drisspg drisspg force-pushed the drisspg/stack/67 branch from 26400e5 to 3d69b0a Compare June 7, 2025 23:08
drisspg added a commit that referenced this pull request Jun 7, 2025
@drisspg drisspg force-pushed the drisspg/stack/67 branch from 3d69b0a to 12ae981 Compare June 7, 2025 23:17
drisspg added a commit that referenced this pull request Jun 7, 2025
@drisspg drisspg force-pushed the drisspg/stack/67 branch from 12ae981 to 195187b Compare June 7, 2025 23:24
@drisspg drisspg requested a review from danielvegamyhre June 7, 2025 23:50
@danielvegamyhre danielvegamyhre (Contributor) left a comment
LGTM, CI not happy yet though


config = config_factory()
if isinstance(
    config, Float8DynamicActivationFloat8SemiSparseWeightConfig

Shouldn't the min_sm check above already handle this? Or is this because the cutlass kernel is only built on sm90 / 90a?

The test case for Float8SemiSparse is failing on H100 because the CUDA backend doesn't support the operator (kernel not built?), so I'm just wondering.

drisspg added a commit that referenced this pull request Jun 11, 2025
@drisspg drisspg force-pushed the drisspg/stack/67 branch from 195187b to 9ed84e7 Compare June 11, 2025 18:21
drisspg added a commit that referenced this pull request Jun 16, 2025
@drisspg drisspg force-pushed the drisspg/stack/67 branch from 9ed84e7 to 4bb19d6 Compare June 16, 2025 16:39
@drisspg drisspg mentioned this pull request Jun 18, 2025
@drisspg drisspg force-pushed the drisspg/stack/67 branch 3 times, most recently from 511db54 to df31f6d Compare June 18, 2025 02:49
@drisspg drisspg force-pushed the drisspg/stack/67 branch from df31f6d to 5f784eb Compare June 18, 2025 02:58
@drisspg drisspg force-pushed the drisspg/stack/67 branch from 5f784eb to 4d77255 Compare June 18, 2025 18:34
Labels: CLA Signed · float8 · topic: new feature