
NVfp4 #2408


Open: drisspg wants to merge 1 commit into main

Conversation

drisspg (Contributor) commented Jun 18, 2025

Stacked PRs:


Add NVFP4 Inference flow

Details:

I kept this separate from MX, but realistically we should probably merge the two. Basic support for block size 16 + e4m3 scales.
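To make "block size 16 + e4m3 scales" concrete, here is a minimal sketch of blockwise quantization in that style; the function name and shapes are illustrative only, not the API added by this PR, and it assumes a PyTorch build that exposes `torch.float8_e4m3fn`.

```python
import torch

F4_E2M1_MAX = 6.0    # largest magnitude representable in e2m1
F8_E4M3_MAX = 448.0  # largest magnitude representable in e4m3

def quantize_nvfp4_blockwise(x: torch.Tensor, block_size: int = 16):
    """Toy NVFP4-style quantization: 16-element blocks with e4m3 scales.

    Assumes x is 2D and its last dim is divisible by block_size.
    """
    m, k = x.shape
    blocks = x.to(torch.float32).reshape(-1, block_size)
    amax = blocks.abs().amax(dim=1, keepdim=True)
    # fp32 block scale chosen so the block's amax maps to the e2m1 max.
    scale_fp32 = amax / F4_E2M1_MAX
    scale_e4m3 = scale_fp32.to(torch.float8_e4m3fn)
    # Quantize with the *rounded* scale so dequant uses the same value;
    # guard against scales that underflowed to zero in e4m3.
    scale_rt = scale_e4m3.to(torch.float32)
    q = torch.where(scale_rt > 0, blocks / scale_rt, torch.zeros_like(blocks))
    q = q.clamp(-F4_E2M1_MAX, F4_E2M1_MAX)  # values ready for e2m1 casting/packing
    return q.reshape(m, k), scale_e4m3.reshape(m, k // block_size)
```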

Double Quant Update

Ignore previous comments; the double quant is actually very similar to NF4: you scale the fp32 scales prior to casting to e4m3 to try to reduce scale quantization error.
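A rough sketch of that idea (the names `double_quant_scales` and `tensor_scale` mirror the description above, not the actual code in this PR): the fp32 block scales are divided by a per-tensor scale before the e4m3 cast, so they land in a range where e4m3 loses less precision.

```python
import torch

F8_E4M3_MAX = 448.0  # largest magnitude representable in e4m3

def double_quant_scales(block_scales_fp32: torch.Tensor):
    """Rescale fp32 block scales with a per-tensor scale before casting to e4m3."""
    # Per-tensor scale chosen so the largest block scale lands near the e4m3 max.
    tensor_scale = (block_scales_fp32.amax() / F8_E4M3_MAX).clamp(
        min=torch.finfo(torch.float32).tiny
    )
    scaled = (block_scales_fp32 / tensor_scale).clamp(max=F8_E4M3_MAX)
    scales_e4m3 = scaled.to(torch.float8_e4m3fn)
    # Effective per-block scale at dequant/matmul time: tensor_scale * e4m3 scale.
    return scales_e4m3, tensor_scale
```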

I have that implemented now in the NVFP4 code when a tensor_scale is given; I just need to figure out how to thread it to the cuBLAS param scale_in_d, or how we want to expose this. We currently don't expose the C matrix in the Python API, so we could use alpha as @gau-nernst pointed out to me; however, we don't expose alpha either 🙃. And if we wanted to use alpha we would need the value on the host, and that sync would likely rule out this option. I might keep this double quant on hold until we have the public API, since I am thinking about adding scale overloads to addmm. That said, I read the cuBLAS docs many times and it feels as though passing to scale_result should work, since we don't set the d_mode and its default value should work.

Early Perf

No double quant here

    python /home/drisspg/meta/vllm/benchmarks/benchmark_throughput.py \
        --backend vllm \
        --model "data/nvfp4-Qwen3-8B" \
        --dataset-name sharegpt \
        --dataset-path data/ShareGPT_V3_unfiltered_cleaned_split.json \
        --num-prompts 1024 \
        --disable-log-stats \
        --gpu-memory-utilization=0.9 \
        --seed 42

    Throughput: 43.23 requests/s, 18347.24 total tokens/s, 8840.47 output tokens/s
    Total num prompt tokens:  225190
    Total num output tokens:  209407

This is even worse than mxfp4... will profile later.

Micro Bench

Llama 70B MLP, no TP:

| Model Configuration | Runtime (μs/iteration) | Speedup vs BF16 |
|:-------------------:|-----------------------:|:---------------:|
| BF16                | 1353.09                | 1.00x           |
| mxfp8               | 766.76                 | 1.76x           |
| mxfp4               | 638.00                 | 2.12x           |
| nvfp4               | 540.41                 | 2.50x           |

Diffusers

BF16 + compile:
|           ckpt_id            |   batch_size |  fuse  |  compile  |  compile_vae  |  quantization  |  sparsify  |   model_memory |   inference_memory |   time |
|:----------------------------:|-------------:|:------:|:---------:|:-------------:|:--------------:|:----------:|---------------:|-------------------:|-------:|
| black-forest-labs/FLUX.1-dev |            1 | False  |   True    |     False     |      None      |   False    |         31.438 |             33.827 |  3.286 |

Errors

Annoyingly, we are getting an error due to the view-as-fp4x2 + packing (https://fburl.com/cd92w431), because this tries to get bitcast inside a Triton kernel, which is very annoying. Not sure why this didn't show up until vLLM with mxfp4.
This is similar to triton-lang/triton#6054; we can make the same changes in _inductor/utils.py as we did for the float8 e8m0 dtype.
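For context, a minimal sketch of the kind of fp4x2 packing involved (the helper name and nibble order are my own for illustration, and it assumes a recent PyTorch nightly that exposes `torch.float4_e2m1fn_x2`). The final `.view()` is the bitcast that Inductor/Triton is choking on.

```python
import torch

def pack_fp4_pairs(codes: torch.Tensor) -> torch.Tensor:
    """Pack 4-bit e2m1 codes (uint8, last dim even) two per byte as float4_e2m1fn_x2."""
    lo = codes[..., 0::2]
    hi = codes[..., 1::2]
    packed = (hi << 4) | lo                      # two e2m1 codes per byte
    return packed.view(torch.float4_e2m1fn_x2)   # 1-byte bitcast to the packed dtype
```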

Numerics

Script: https://gist.github.com/drisspg/4024ed055a6db911495102614c674c4c (still emulating until we fix this bug in the cublasLt bindings).
Double quant really helps with tensors that have very small amax values, likely by reducing the number of underflows (will verify):
(Figure: NVFP4 GELU performance heatmap)
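To illustrate the underflow intuition (numbers made up for this example, not taken from the heatmap): with a tiny amax, the fp32 block scale falls below e4m3's smallest subnormal and casts to zero, while the double-quant rescale keeps it representable.

```python
import torch

F4_E2M1_MAX = 6.0
F8_E4M3_MAX = 448.0

amax = torch.tensor(1e-4)
block_scale = amax / F4_E2M1_MAX                  # ~1.67e-5
print(block_scale.to(torch.float8_e4m3fn))        # 0.0: below e4m3's smallest subnormal (~2**-9)

tensor_scale = block_scale / F8_E4M3_MAX          # per-tensor rescale (single block here)
rescaled = (block_scale / tensor_scale).clamp(max=F8_E4M3_MAX)  # guard against rounding past 448
print(rescaled.to(torch.float8_e4m3fn))           # 448.0: a usable, non-zero scale
```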


pytorch-bot (bot) commented Jun 18, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/2408

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit d5bded3 with merge base 4e25496:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

drisspg (Contributor, Author) commented Jun 23, 2025

Weight-only fails with compile; bisected to:

Likely from the workaround to get Triton to not error on e2m1.

Disabling lowerings fixed the issue.
    Starting bisect by getting upper bound.
    Upper bound of 38 found for inductor.
    Bisecting inductor - lowerings (Range: [0, 38], Midpoint: 19)
    Bisecting inductor - lowerings (Range: [20, 38], Midpoint: 29)
    Bisecting inductor - lowerings (Range: [30, 38], Midpoint: 34)
    Bisecting inductor - lowerings (Range: [35, 38], Midpoint: 36)
    Bisecting inductor - lowerings (Range: [35, 36], Midpoint: 35)
    Binary search completed for inductor - lowerings. The bisect number is 36. Debug info: convert_element_type_5
    Bisection status deleted.
    Bisection result: BisectionResult(backend='inductor', subsystem='lowerings', bisect_number=36, debug_info='convert_element_type_5')

    6. Testing inductor config workarounds for WEIGHT_ONLY:
       {'inductor.coordinate_descent_tuning': False}           ERROR
       {'inductor.force_fuse_int_mm_with_mul': False}          ERROR
       {'inductor.post_grad_passes': False}                    ERROR
       {'inductor.pattern_matcher': False}                     ERROR
       {'inductor.epilogue_fusion': False}                     ERROR
       {'inductor.max_autotune': False}                        ERROR
       {'triton.autotune_pointwise': False}                    ✗ 3.1dB
       {'inductor.benchmark_kernel': False}                    ERROR
       {'inductor.aggressive_fusion': False}                   ERROR

    7. Testing other compile backends:
       Backend 'eager':     SQNR = 20.00 dB
       Backend 'aot_eager': SQNR = 20.00 dB
       skipping cudagraphs due to cpu device (_tensor_constant0). Found from:
         File "/home/drisspg/.conda/envs/nightly/lib/python3.12/site-packages/torch/_dynamo/external_utils.py", line 70, in inner
           return fn(*args, **kwargs)
       Backend 'cudagraphs': SQNR = 20.00 dB

drisspg (Contributor, Author) commented Jun 23, 2025

@vkuzo updated to use the mm_config

drisspg requested review from vkuzo and gau-nernst, June 23, 2025.
gau-nernst (Collaborator) left a comment:


Just some comments

Comment on lines +193 to +204:

    assert self.activation_dtype == torch.float4_e2m1fn_x2, (
        f"NVFP4 requires activation_dtype=float4_e2m1fn_x2, got {self.activation_dtype}"
    )
    assert self.weight_dtype == torch.float4_e2m1fn_x2, (
        f"NVFP4 requires weight_dtype=float4_e2m1fn_x2, got {self.weight_dtype}"
    )
    assert self.scale_dtype == torch.float8_e4m3fn, (
        f"NVFP4 requires scale_dtype=float8_e4m3fn, got {self.scale_dtype}"
    )
    assert self.block_size == 16, (
        f"NVFP4 requires block_size=16, got {self.block_size}"
    )
gau-nernst (Collaborator):

Just curious: what's the point of exposing all of these when only a specific value is accepted?

drisspg (Contributor, Author):

This is a good point. My original intent was to not make another subclass and just merge in with mxfp; cc @vkuzo, I imagine we want this separated? I started to work on an observer for this, since without it this is just a worse mxfp4.

Comment on lines +428 to +429:

    a_scale_blocked = to_blocked(a_scale)
    b_scale_blocked = to_blocked(b_scale)
gau-nernst (Collaborator):

I have wondered about this for MX-dtype as well. It makes sense for MX-dtype to have scale swizzling here since we may want the layout to be vendor-neutral. But NVFP4 is specific to NVIDIA, so why not put this under to_nvfp4()?

drisspg (Contributor, Author):

This is a good question. For MX, I recently confirmed that AMD does not require swizzling. Good point about NVFP4; I am actually going to update this code path to cache the swizzled layout for the weight.
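A toy sketch of that caching idea (the class and attribute names are made up here; `swizzle` stands in for the to_blocked call quoted above): compute the blocked layout once at quantization time and reuse it on every matmul.

```python
from typing import Callable, Optional
import torch

class NVFP4WeightSketch:
    """Toy container that caches the swizzled (blocked) scale layout for a weight."""

    def __init__(self, qdata: torch.Tensor, scale: torch.Tensor,
                 swizzle: Callable[[torch.Tensor], torch.Tensor]):
        self.qdata = qdata          # packed fp4 weight data
        self.scale = scale          # e4m3 block scales, row-major
        self._swizzle = swizzle     # e.g. the to_blocked helper from the diff
        self._blocked_scale: Optional[torch.Tensor] = None

    def blocked_scale(self) -> torch.Tensor:
        # Vendor-specific layout computed once, reused for every matmul.
        if self._blocked_scale is None:
            self._blocked_scale = self._swizzle(self.scale)
        return self._blocked_scale
```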

Labels: CLA Signed, mx, topic: new feature