tests : add non-cont K,V FA tests #14756

Merged 2 commits into master from gg/tests-fa-non-cont on Jul 23, 2025

Conversation

ggerganov (Member) commented on Jul 18, 2025

Continuation of #14363.

With the introduction of a split KV cache, the K and V tensors passed to FA can now be non-contiguous. Add tests in test-backend-ops to cover this.
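To illustrate what non-contiguous means here, below is a minimal, hypothetical C sketch (not the PR's actual test code; the sizes and the k_all name are made up for illustration) of how a split KV cache can end up handing FA a K tensor that is only a strided view into a larger buffer:

// hypothetical sketch: a K tensor that is a non-contiguous view into a larger buffer
#include <stdio.h>
#include "ggml.h"

int main(void) {
    struct ggml_init_params ip = {
        /*.mem_size   =*/ 16*1024*1024,
        /*.mem_buffer =*/ NULL,
        /*.no_alloc   =*/ false,
    };
    struct ggml_context * ctx = ggml_init(ip);

    // head dim, KV slots, KV heads (illustrative values only)
    const int64_t d = 128, n_kv = 64, n_head_kv = 4;

    // backing buffer with twice the KV slots, e.g. two sequence streams of a split cache
    struct ggml_tensor * k_all = ggml_new_tensor_3d(ctx, GGML_TYPE_F16, d, 2*n_kv, n_head_kv);

    // view only the first n_kv slots per head while keeping the parent's per-head stride:
    // nb[2] is nb[1]*2*n_kv instead of nb[1]*n_kv, so the view is not contiguous
    struct ggml_tensor * k = ggml_view_3d(ctx, k_all, d, n_kv, n_head_kv,
                                          k_all->nb[1], k_all->nb[2], 0);

    printf("K contiguous: %d\n", (int) ggml_is_contiguous(k));   // prints 0

    ggml_free(ctx);
    return 0;
}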

This issue was reported here: #14363 (comment)

It can be reproduced with the CUDA backend using the following command:

make -j && LLAMA_SET_ROWS=1 ./bin/llama-parallel -hf ggml-org/Qwen2.5-Coder-3B-Q8_0-GGUF -np 8 -ns 128 -s 1 -c 4096 -fa -ngl 99 --top-k 1 -ctk q8_0 -ctv q8_0
0.02.205.072 I common_init_from_params: setting dry_penalty_last_n to ctx_size = 4608
0.02.205.072 W common_init_from_params: warming up the model with an empty run - please wait ... (--no-warmup to disable)
0.02.233.146 I No new questions so proceed with build-in defaults.
0.02.233.146 I 

0.02.240.868 I main: Simulating parallel requests from clients:
0.02.240.870 I main: n_parallel = 8, n_sequences = 128, cont_batching = 1, system tokens = 273
0.02.240.870 I 
0.02.240.870 I Processing requests ...

0.02.241.045 I main: clearing the KV cache
0.02.248.040 I Client   0, seq    0, junk =    0, prompt = 284, started decoding ...
0.02.254.999 I Client   1, seq    1, junk =    0, prompt = 284, started decoding ...
0.02.262.112 I Client   2, seq    2, junk =    0, prompt = 284, started decoding ...
0.02.269.266 I Client   3, seq    3, junk =    0, prompt = 290, started decoding ...
0.02.276.355 I Client   4, seq    4, junk =    0, prompt = 288, started decoding ...
0.02.283.337 I Client   5, seq    5, junk =    0, prompt = 285, started decoding ...
0.02.290.405 I Client   6, seq    6, junk =    0, prompt = 286, started decoding ...
0.02.297.367 I Client   7, seq    7, junk =    0, prompt = 284, started decoding ...
/home/ggerganov/development/github/llama.cpp/ggml/src/ggml-cuda/template-instances/../fattn-common.cuh:748: GGML_ASSERT(ggml_is_contiguously_allocated(K)) failed

cc @JohannesGaessler
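For reference, the FlashAttention cases in test-backend-ops can be run on their own with the standard -o op filter (assuming the usual test-backend-ops invocation; the exact names of the non-contiguous K,V cases added here may differ):

make -j && ./bin/test-backend-ops test -o FLASH_ATTN_EXT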

@github-actions github-actions bot added the testing (Everything test related) label on Jul 18, 2025
@ggerganov ggerganov mentioned this pull request Jul 18, 2025
OrangeDoro's comment was marked as off-topic.

* CUDA: fix quantized KV cache + multiple sequences

* Update ggml/src/ggml-cuda/fattn-common.cuh

Co-authored-by: Georgi Gerganov <[email protected]>

---------

Co-authored-by: Georgi Gerganov <[email protected]>
@JohannesGaessler JohannesGaessler self-requested a review as a code owner July 23, 2025 10:35
@github-actions github-actions bot added the Nvidia GPU (Issues specific to Nvidia GPUs) and ggml (changes relating to the ggml tensor library for machine learning) labels on Jul 23, 2025
@ggerganov ggerganov merged commit 07a19e2 into master Jul 23, 2025
47 checks passed
@ggerganov ggerganov deleted the gg/tests-fa-non-cont branch July 23, 2025 11:08