Add IntxUnpackedTensor #2732

Open: metascroy wants to merge 4 commits into main


Conversation

metascroy (Contributor)

No description provided.

pytorch-bot bot commented Aug 11, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/2732

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 93948a4 with merge base 6cfa477:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

metascroy requested a review from jerryzh168 on August 11, 2025
meta-cla bot added the CLA Signed label on Aug 11, 2025
block_size: the block size for quantization, representing the granularity; for example, groupwise quantization will have block_size (1, group_size)
"""

tensor_data_attrs = ["int_data", "scale", "zero_point"]
jerryzh168 (Contributor) commented on Aug 11, 2025:
btw if you update these to tensor_data_names and tensor_attribute_names you'll be able to remove some of the implementations, see docs in https://github.com/pytorch/ao/pull/2710/files#diff-d2a11602a79e83305208472f1abe6a4106f02ce62a7f9524007181813863fcf6R687, example: #2738
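
A minimal sketch of what the suggested change could look like, assuming TorchAOBaseTensor auto-generates the boilerplate ops from these class attributes as the linked docs describe; the split between tensor data and plain attributes here is illustrative:

```python
# Sketch, not the PR's actual code: assumes TorchAOBaseTensor reads these
# class attributes to auto-generate flatten/unflatten and default op impls.
class IntxUnpackedTensor(TorchAOBaseTensor):
    # attributes that hold tensors (was: tensor_data_attrs)
    tensor_data_names = ["int_data", "scale", "zero_point"]
    # non-tensor attributes; block_size is taken from this PR's docstring
    tensor_attribute_names = ["block_size"]
```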

metascroy (Contributor, Author) replied:
I can still override the behavior in TorchAOBaseTensor, right?

For example, it looks like aten._to_copy.default gets auto-populated, but I want to define its dtype variant in addition to the device variant.
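
For context, a hedged sketch of the kind of override being asked about; it assumes TorchAOBaseTensor exposes an implements() registration decorator and that a subclass registration takes precedence over the auto-populated one (both assumptions, not verified behavior):

```python
# Sketch only: overriding the auto-populated aten._to_copy.default so it
# handles the dtype variant as well as the device variant.
import torch

aten = torch.ops.aten

@IntxUnpackedTensor.implements(aten._to_copy.default)
def _(func, types, args, kwargs):
    self = args[0]
    device = kwargs.get("device", self.device)
    dtype = kwargs.get("dtype", None)
    # int_data stays int8; only scale follows the requested dtype
    # (this reading of the dtype variant is an assumption, and the
    # constructor signature below is illustrative)
    return IntxUnpackedTensor(
        self.int_data.to(device=device),
        self.scale.to(device=device, dtype=dtype),
        self.zero_point.to(device=device),
        self.block_size,
    )
```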

jerryzh168 (Contributor) replied:
this should be working, I haven't actively tested this behavior though, I'll try to add a test for this
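
A sketch of what such a test might check; the from_hp constructor and field names here are hypothetical, used only to illustrate that a subclass override should win over the auto-populated op:

```python
# Sketch of a possible test, not code from this PR.
import torch

def test_to_copy_dtype_override():
    w = IntxUnpackedTensor.from_hp(torch.randn(4, 8), block_size=(1, 4))
    out = w.to(dtype=torch.bfloat16)
    # the dtype variant should have been routed to the override above
    assert out.scale.dtype == torch.bfloat16
```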

metascroy (Contributor, Author) replied:
Changed

)

@classmethod
def from_float(
jerryzh168 (Contributor) commented:
nit: we are standardizing on from_hp now

metascroy (Contributor, Author) replied:
What does hp stand for?

jerryzh168 (Contributor) replied:
high precision
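
A small sketch of the rename being suggested; the parameter list is illustrative, based on the fields discussed in this PR rather than the actual diff:

```python
# Sketch only: from_float renamed to from_hp ("hp" = high precision);
# the signature is illustrative, not the PR's actual one.
import torch

class IntxUnpackedTensor(torch.Tensor):
    @classmethod
    def from_hp(cls, hp_tensor: torch.Tensor, block_size: tuple):
        """Create from a high-precision (e.g. fp32/bf16) weight tensor."""
        ...
```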

@@ -30,3 +30,8 @@ class PackingFormat(str, Enum):
preshuffled is referring to the preshuffled format used by fbgemm kernels
"""
PRESHUFFLED = "preshuffled"

"""
Unpacked means the subbyte quantized data is stored as int8
jerryzh168 (Contributor) commented on Aug 11, 2025:
is this int only? we could be more specific and say UnpackedToInt8

metascroy (Contributor, Author) replied:
Sure, I can make the format UNPACKED_TO_INT8
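
A sketch of how the more specific name could read, following the style of the PRESHUFFLED entry shown in the diff above; the string value is an assumption:

```python
# Sketch only: the more specific spelling suggested in the review.
from enum import Enum

class PackingFormat(str, Enum):
    ...
    """
    Unpacked means the subbyte quantized data is stored as int8
    """
    UNPACKED_TO_INT8 = "unpacked_to_int8"
```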

@@ -2060,6 +2061,8 @@ class IntxWeightOnlyConfig(AOBaseConfig):
mapping_type: MappingType = MappingType.SYMMETRIC
scale_dtype: Optional[torch.dtype] = None
layout: Layout = QDQLayout()
packing_format: PackingFormat = PackingFormat.UNPACKED
VERSION: int = 1
jerryzh168 (Contributor) commented:
nit: we updated the name to version
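
The corresponding rename in the config, sketched under the assumption that only the field name changes (other fields copied from the diff above):

```python
# Sketch only: VERSION renamed to lowercase version, per current convention.
from dataclasses import dataclass

@dataclass
class IntxWeightOnlyConfig(AOBaseConfig):
    packing_format: PackingFormat = PackingFormat.UNPACKED
    version: int = 1
```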

metascroy added the topic: not user facing label on Aug 12, 2025