
fix: combined projection #1324

Open

ZhiyuLi-Nvidia wants to merge 2 commits into main from zhiyul/fix_combined_projection

Conversation

Contributor

@ZhiyuLi-Nvidia ZhiyuLi-Nvidia commented Feb 18, 2026

What does this PR do?

Problem

The custom model's combined projections (qkv_proj, gate_up_proj) use ColwiseParallel for tensor parallelism, but the weight layout was a naive concatenation:

  • QKV: [all Q rows | all K rows | all V rows]
  • gate_up: [all gate rows | all up rows]

With ColwiseParallel (which shards dim 0 evenly), this gives each TP rank the wrong mix of Q/K/V heads: e.g., rank 0 gets all Q and some K, while rank 1 gets the remaining K and all V. This produces silently incorrect results, especially under GQA, where the Q and KV head counts differ. The failure mode is sketched below.
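A minimal sketch of the failure mode (shapes, names, and TP degree are illustrative only, not taken from this PR's code):

```python
import torch

# Hypothetical GQA shapes: 8 Q heads, 2 KV heads, head_dim 4, hidden 32, TP 2.
num_q_heads, num_kv_heads, head_dim, hidden, tp = 8, 2, 4, 32, 2

q = torch.randn(num_q_heads * head_dim, hidden)   # 32 rows
k = torch.randn(num_kv_heads * head_dim, hidden)  # 8 rows
v = torch.randn(num_kv_heads * head_dim, hidden)  # 8 rows

# Naive concatenation: [all Q rows | all K rows | all V rows] -> 48 rows.
qkv_naive = torch.cat([q, k, v], dim=0)

# ColwiseParallel shards dim 0 evenly: each rank gets 24 contiguous rows.
rank0, rank1 = qkv_naive.chunk(tp, dim=0)
# rank0: rows 0..23  -> 24 of the 32 Q rows and no K or V at all.
# rank1: rows 24..47 -> the last 8 Q rows plus all K and all V rows.
# Neither rank holds complete, matched (Q group, K, V) triples.
print(rank0.shape, rank1.shape)  # torch.Size([24, 32]) torch.Size([24, 32])
```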

Changelog

Interleaved weight layout so that ColwiseParallel sharding naturally gives each rank complete, matched groups (see the sketch after this list):

  • QKV: KV-head-grouped layout [Q_group_0 | K_0 | V_0 | Q_group_1 | K_1 | V_1 | ...], so each TP rank gets whole KV-head groups with their corresponding Q heads.
  • gate_up: row-interleaved layout [gate_0, up_0, gate_1, up_1, ...], so each TP rank gets matched gate/up pairs.
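A minimal sketch of both interleaved layouts, continuing the illustrative shapes above (the PR's actual helpers and names may differ). It assumes the standard GQA grouping in which consecutive blocks of q_per_kv query heads share one KV head:

```python
import torch

num_q_heads, num_kv_heads, head_dim, hidden, tp = 8, 2, 4, 32, 2
q_per_kv = num_q_heads // num_kv_heads  # 4 Q heads per KV-head group

q = torch.randn(num_q_heads * head_dim, hidden)
k = torch.randn(num_kv_heads * head_dim, hidden)
v = torch.randn(num_kv_heads * head_dim, hidden)

# KV-head-grouped QKV: [Q_group_0 | K_0 | V_0 | Q_group_1 | K_1 | V_1 | ...]
q_groups = q.view(num_kv_heads, q_per_kv * head_dim, hidden)
k_heads = k.view(num_kv_heads, head_dim, hidden)
v_heads = v.view(num_kv_heads, head_dim, hidden)
qkv = torch.cat([q_groups, k_heads, v_heads], dim=1).reshape(-1, hidden)

# An even dim-0 chunk now hands each rank whole (Q group, K, V) triples:
# rank 0 gets group 0 (24 rows), rank 1 gets group 1 (24 rows).
assert all(s.shape[0] == 24 for s in qkv.chunk(tp, dim=0))

# Row-interleaved gate_up: [gate_0, up_0, gate_1, up_1, ...]
ffn = 16
gate = torch.randn(ffn, hidden)
up = torch.randn(ffn, hidden)
gate_up = torch.stack([gate, up], dim=1).reshape(-1, hidden)
# Each rank's dim-0 chunk now contains matched (gate_i, up_i) row pairs.
```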

Also included: a DCP-based base-model loading option (sketched below).
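For context, a minimal sketch of what loading via PyTorch Distributed Checkpoint (torch.distributed.checkpoint, "DCP") typically looks like; the helper name and wiring here are hypothetical, not this PR's actual API:

```python
import torch
import torch.distributed.checkpoint as dcp


def load_base_model_dcp(model: torch.nn.Module, checkpoint_dir: str) -> None:
    # Hypothetical helper: DCP loads shards in place into the provided
    # state dict, so each rank reads only the tensors it owns instead of
    # materializing a full consolidated checkpoint on every rank.
    state_dict = {"model": model.state_dict()}
    dcp.load(state_dict, checkpoint_id=checkpoint_dir)
    model.load_state_dict(state_dict["model"])
```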

Tests

Matched the loss curves of the HF and custom implementations.


Additional Information

  • Related to # (issue)

Signed-off-by: Zhiyu Li <zhiyul@NVIDIA.com>

copy-pr-bot bot commented Feb 18, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@ZhiyuLi-Nvidia
Contributor Author

/ok to test 4b3ac35

Signed-off-by: Zhiyu Li <zhiyul@NVIDIA.com>
@ZhiyuLi-Nvidia
Contributor Author

/ok to test 0687630
