
Fused quant linear kernel (#19490)

Open
DrJessop wants to merge 3 commits into pytorch:main from DrJessop:export-D103754853

Conversation

@DrJessop (Contributor) commented May 11, 2026

Summary:

Fused quant linear kernel (out = inp @ weight^T + bias) with optional dequantize/quantize. Supports 4 sets of qparams (inp, weight, bias, out), optional bias, and per-tensor/per-channel quantization.

Reviewed By: mvartani-meta

Differential Revision: D103754853
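
For intuition, here is a minimal per-tensor int8 reference of the semantics described above. The function name, signature, and parameter names are hypothetical sketches, not the kernel's actual API; the fused kernel computes the same result without materializing the float intermediates.

```python
import numpy as np

# Hypothetical reference for out = inp @ weight^T + bias with separate
# qparams (scale, zero point) for inp, weight, bias, and out.
def quant_linear_ref(inp_q, w_q, b_q,
                     inp_scale, inp_zp, w_scale, w_zp,
                     b_scale, b_zp, out_scale, out_zp):
    # Dequantize each operand with its own qparams.
    x = (inp_q.astype(np.float32) - inp_zp) * inp_scale  # [M, K]
    # Per-tensor weight scale shown here; per-channel quantization would
    # instead broadcast a [N, 1] scale vector over the weight rows.
    w = (w_q.astype(np.float32) - w_zp) * w_scale        # [N, K]
    b = (b_q.astype(np.float32) - b_zp) * b_scale        # [N]
    y = x @ w.T + b                                      # [M, N]
    # Requantize the result with the output qparams.
    q = np.round(y / out_scale) + out_zp
    return np.clip(q, -128, 127).astype(np.int8)
```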

@pytorch-bot Bot commented May 11, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/19490

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV

There is 1 currently active SEV. If your PR is affected, please view it below:

⏳ 8 Pending, 2 Unrelated Failures

As of commit c987558 with merge base 8020fe0:

FLAKY - The following jobs failed but were likely due to flakiness present on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla Bot added the CLA Signed label May 11, 2026
@meta-codesync Bot commented May 11, 2026

@DrJessop has exported this pull request. If you are a Meta employee, you can view the originating Diff in D103754853.

@github-actions

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

@meta-codesync Bot changed the title from Fused quant linear kernel to Fused quant linear kernel (#19490) May 12, 2026
DrJessop pushed a commit to DrJessop/executorch that referenced this pull request May 12, 2026
Summary:

Fused quant linear kernel (out = inp @ weight^T + bias) with optional dequantize/quantize. Supports 4 sets of qparams (inp, weight, bias, out), optional bias, and per-tensor/per-channel quantization.

Reviewed By: mvartani-meta

Differential Revision: D103754853
@DrJessop force-pushed the export-D103754853 branch from c57549a to 1d27ab1 May 12, 2026 04:22
Andrew Grebenisan added 3 commits May 12, 2026 10:07
Summary:

Fused quant hardswish kernel with optional dequantize/quantize. Unary op that applies x * min(max(x+3, 0), 6) / 6. Supports per-tensor and per-channel quantization.

Reviewed By: mvartani-meta

Differential Revision: D103754780
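
A matching per-tensor int8 sketch of the hardswish semantics described in this commit (names are hypothetical, not the kernel's API):

```python
import numpy as np

# Hypothetical reference: dequantize, apply x * min(max(x + 3, 0), 6) / 6,
# then requantize with the output qparams.
def quant_hardswish_ref(x_q, in_scale, in_zp, out_scale, out_zp):
    x = (x_q.astype(np.float32) - in_zp) * in_scale
    y = x * np.minimum(np.maximum(x + 3.0, 0.0), 6.0) / 6.0  # hardswish
    q = np.round(y / out_scale) + out_zp
    return np.clip(q, -128, 127).astype(np.int8)
```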
Summary:

Fused quant batch matrix multiply kernel with optional dequantize/quantize. Binary op on 3D tensors [B,M,K] x [B,K,N] -> [B,M,N]. Supports per-tensor and per-channel quantization.

Reviewed By: mvartani-meta

Differential Revision: D103754815
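
Likewise, a per-tensor int8 sketch of the batched matmul semantics in this commit (hypothetical names):

```python
import numpy as np

# Hypothetical reference for [B, M, K] x [B, K, N] -> [B, M, N] with
# per-tensor qparams on both inputs and the output.
def quant_bmm_ref(a_q, b_q, a_scale, a_zp, b_scale, b_zp, out_scale, out_zp):
    a = (a_q.astype(np.float32) - a_zp) * a_scale  # [B, M, K]
    b = (b_q.astype(np.float32) - b_zp) * b_scale  # [B, K, N]
    y = a @ b                                      # batched over B
    q = np.round(y / out_scale) + out_zp
    return np.clip(q, -128, 127).astype(np.int8)
```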
Summary:

Fused quant linear kernel (out = inp @ weight^T + bias) with optional dequantize/quantize. Supports 4 sets of qparams (inp, weight, bias, out), optional bias, and per-tensor/per-channel quantization.

Reviewed By: mvartani-meta

Differential Revision: D103754853
@DrJessop force-pushed the export-D103754853 branch from 1d27ab1 to c987558 May 12, 2026 17:08

Labels

CLA Signed (managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed) · fb-exported · meta-exported


2 participants