
fix qwen2 vl crash in continuous batching #3004

Merged
1 commit merged into huggingface:main from the qwen2_vl_crash branch on Feb 20, 2025

Conversation

sywangyi (Contributor):

@OlivierDehaene OR @Narsil

File "/usr/src/server/text_generation_server/interceptor.py", line 24, in intercept
return await response
File "/opt/conda/lib/python3.11/site-packages/opentelemetry/instrumentation/grpc/_aio_server.py", line 120, in _unary_interceptor
raise error
File "/opt/conda/lib/python3.11/site-packages/opentelemetry/instrumentation/grpc/_aio_server.py", line 111, in _unary_interceptor
return await behavior(request_or_iterator, context)
File "/usr/src/server/text_generation_server/server.py", line 212, in Decode
batch = self.model.batch_type.concatenate(batches)
File "/opt/conda/lib/python3.11/contextlib.py", line 81, in inner
return func(*args, **kwds)
File "/usr/src/server/text_generation_server/models/vlm_causal_lm.py", line 201, in concatenate
batch = super(VlmCausalLMBatch, cls).concatenate(batches)
File "/opt/conda/lib/python3.11/contextlib.py", line 81, in inner
return func(*args, **kwds)
File "/usr/src/server/text_generation_server/models/flash_causal_lm.py", line 838, in concatenate
position_ids[start_index:end_index] = batch.position_ids
RuntimeError: The expanded size of the tensor (1) must match the existing size (3) at non-singleton dimension 0. Target sizes: [1]. Tensor sizes: [3]
2025-02-10T09:58:20.846762Z ERROR batch{batch_size=2}:decode:decode{size=2}:decode{size=2}: text_generation_router_v3::client: backends/v3/src/client/mod.rs:45: Server error: The expanded size of the tensor (1) must match the existing size (3) at non-singleton dimension 0. Target sizes: [1]. Tensor sizes: [3]
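
The shape mismatch is consistent with Qwen2-VL's M-RoPE, which tracks three position ids per token (temporal, height, width), while the concatenated decode buffer is allocated for a single id per token. Below is a minimal sketch that reproduces the error and shows a shape-aware allocation; the tensor shapes and names are illustrative assumptions, not TGI's actual internals.

```python
import torch

# Illustrative only, not TGI code.
# A plain-LM decode token carries one position id; a Qwen2-VL decode token
# carries three (temporal, height, width) under M-RoPE.
text_ids = torch.arange(1)                    # shape [1]
mrope_ids = torch.zeros(3, dtype=torch.long)  # shape [3]

# Naive concatenate into a flat 1-D buffer, as the traceback suggests:
position_ids = torch.empty(2, dtype=torch.long)
position_ids[0:1] = text_ids
try:
    position_ids[1:2] = mrope_ids  # target slice [1] vs source [3]
except RuntimeError as e:
    print(e)  # "The expanded size of the tensor (1) must match the existing size (3) ..."

# Shape-aware concatenate: size the buffer from each batch's trailing dims
# so per-token M-RoPE ids survive the copy.
batches = [torch.zeros(1, 3, dtype=torch.long), torch.ones(1, 3, dtype=torch.long)]
total = sum(b.shape[0] for b in batches)
out = batches[0].new_empty((total, *batches[0].shape[1:]))
start = 0
for b in batches:
    out[start : start + b.shape[0]] = b
    start += b.shape[0]
print(out.shape)  # torch.Size([2, 3])
```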

@sywangyi (Contributor, Author):

@drbh, please help review.

@sywangyi (Contributor, Author):

Easy to reproduce:

Server:
text-generation-launcher --model-id=Qwen/Qwen2-VL-7B-Instruct

Client (two concurrent streaming requests, so the server forms a decode batch of size 2):

curl -N 0.0.0.0:80/generate_stream \
  -X POST \
  -d '{"inputs":"What is in the picture?\n\n","parameters":{"max_new_tokens":100, "seed": 42, "do_sample":true}}' \
  -H 'Content-Type: application/json' &

curl -N 0.0.0.0:80/generate_stream \
  -X POST \
  -d '{"inputs":"What is in the picture?\n\n","parameters":{"max_new_tokens":100, "seed": 42, "do_sample":true}}' \
  -H 'Content-Type: application/json' &
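
The same repro as a single Python script, in case backgrounding the curl calls is awkward; it assumes the TGI server from the launcher command above is listening on 0.0.0.0:80.

```python
# Fire two concurrent streaming requests so the server batches them together.
import threading
import requests

URL = "http://0.0.0.0:80/generate_stream"
PAYLOAD = {
    "inputs": "What is in the picture?\n\n",
    "parameters": {"max_new_tokens": 100, "seed": 42, "do_sample": True},
}

def stream(tag: str) -> None:
    # Stream server-sent events line by line, tagged per request.
    with requests.post(URL, json=PAYLOAD, stream=True) as resp:
        for line in resp.iter_lines():
            if line:
                print(tag, line.decode())

threads = [threading.Thread(target=stream, args=(f"req{i}",)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```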

drbh mentioned this pull request on Feb 20, 2025
drbh (Collaborator) left a comment:

lgtm, thanks for the fix!

drbh merged commit 06dfe9a into huggingface:main on Feb 20, 2025
29 of 37 checks passed
sywangyi deleted the qwen2_vl_crash branch on February 24, 2025