
[Model][VLM] Add Qwen2.5-Omni model support (thinker only) #15130

Open

wants to merge 28 commits into base: main
Conversation

fyabc
Contributor

@fyabc fyabc commented Mar 19, 2025

This PR adds support for the Qwen2.5-Omni model (thinker only).

Requirements

This PR requires the corresponding transformers PR.

pip install git+https://github.com/BakerBunker/transformers.git@qwen25omni 

Note: transformers must be installed from source, from that branch.

Example Usage

# Audio + image + video
python examples/offline_inference/qwen2_5_omni/only_thinker.py -q mixed_modalities

# Read vision and audio inputs from a single video file
# NOTE: V1 engine does not support interleaved modalities yet.
VLLM_USE_V1=0 python examples/offline_inference/qwen2_5_omni/only_thinker.py -q use_audio_in_video

# Process audio inputs
python examples/offline_inference/audio_language.py --model-type qwen2_5_omni

# Process image inputs
python examples/offline_inference/vision_language.py --modality image --model-type qwen2_5_omni

# Process video inputs
python examples/offline_inference/vision_language.py --modality video --model-type qwen2_5_omni

Notes

The whole Qwen2.5-Omni model includes three parts:

  • thinker: multimodal inputs -> text responses & hidden states
  • talker: text responses & hidden states from thinker -> speech codes
  • code2wav (streaming codec decoder): codes -> speech

This PR implements only the thinker part for now: it accepts multimodal inputs (images / videos / audios) and generates text responses, similar to other common VLMs.
We have also developed an end-to-end implementation (to be released soon), but because of its significant impact on the vLLM framework architecture, we will not open the related pull request for now.
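As a rough illustration of that three-stage data flow, here is a minimal sketch. These class and function names are made up for exposition only; they are not vLLM or transformers APIs:

```python
from dataclasses import dataclass

@dataclass
class ThinkerOutput:
    text: str                    # text response returned to the user
    hidden_states: list[float]   # hidden states consumed by the talker

def thinker(multimodal_inputs: dict) -> ThinkerOutput:
    """Stage 1 (this PR): images / videos / audios -> text + hidden states."""
    return ThinkerOutput(text="a cat on a mat", hidden_states=[0.1, 0.2])

def talker(thinker_out: ThinkerOutput) -> list[int]:
    """Stage 2 (not in this PR): text + hidden states -> discrete speech codes."""
    return [17, 42, 7]

def code2wav(codes: list[int]) -> bytes:
    """Stage 3 (not in this PR): streaming codec decoder, codes -> waveform."""
    return bytes(len(codes))

# Thinker-only (what this PR enables): text out, no speech.
reply = thinker({"prompt": "Describe the scene", "image": object()})

# Full pipeline (future work): text plus synthesized speech.
waveform = code2wav(talker(reply))
```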

FIX #15563


👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger the full CI run by default. Instead, only the fastcheck CI runs: a small, essential subset of tests that quickly catches errors. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge.

🚀

@mergify mergify bot added the documentation, multi-modality (#4194), and v1 labels Mar 19, 2025
@DarkLight1337
Member

Sorry I don't have time to review in detail tonight, but from a quick glance, can you add this model to the following pages?

  • Supported Models page
  • tests/models/registry.py (set is_available_online=False to pass CI until the model repo is released on HF)
  • tests/models/multimodal/processing/test_common.py
  • tests/models/decoder_only/vision_language/test_models.py (optional for now)

@fyabc
Contributor Author

fyabc commented Mar 19, 2025

Sorry I don't have time to review in detail tonight, but from a quick glance, can you add this model to the following pages?

  • Supported Models page
  • tests/models/registry.py (set is_available_online=False to pass CI until the model repo is released on HF)
  • tests/models/multimodal/processing/test_common.py
  • tests/models/decoder_only/vision_language/test_models.py (optional for now)

OK, I will add them tomorrow.

@ywang96 ywang96 self-assigned this Mar 19, 2025
@yangninghua

@fyabc Will Qwen/Qwen2.5-Omni-7B be supported?

@ywang96
Member

ywang96 commented Mar 21, 2025

Sorry for the delay - going to take a look at this PR tonight!

Member

@ywang96 ywang96 left a comment


Thank you for the contribution! I have left some comments!

@fyabc
Contributor Author

fyabc commented Mar 24, 2025

Hi @ywang96 @DarkLight1337, I updated some other examples here; please check the code.

@mergify mergify bot added the needs-rebase label Mar 30, 2025
@mergify mergify bot removed the needs-rebase label Mar 31, 2025
Signed-off-by: Roger Wang <[email protected]>
@ywang96
Member

ywang96 commented Mar 31, 2025

Looks like this PR doesn't work with huggingface/transformers#36752 yet

ERROR 03-31 07:57:21 [core.py:377]   File "/tmp-nvme/myenv/lib/python3.12/site-packages/transformers/processing_utils.py", line 1082, in from_pretrained
ERROR 03-31 07:57:21 [core.py:377]     return cls.from_args_and_dict(args, processor_dict, **kwargs)
ERROR 03-31 07:57:21 [core.py:377]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-31 07:57:21 [core.py:377]   File "/tmp-nvme/myenv/lib/python3.12/site-packages/transformers/processing_utils.py", line 876, in from_args_and_dict
ERROR 03-31 07:57:21 [core.py:377]     processor = cls(*args, **processor_dict)
ERROR 03-31 07:57:21 [core.py:377]                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-31 07:57:21 [core.py:377]   File "/tmp-nvme/myenv/lib/python3.12/site-packages/transformers/models/qwen2_5_omni/processing_qwen2_5_omni.py", line 70, in __init__
ERROR 03-31 07:57:21 [core.py:377]     self.image_token = self.tokenizer.image_token
ERROR 03-31 07:57:21 [core.py:377]                        ^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-31 07:57:21 [core.py:377]   File "/tmp-nvme/myenv/lib/python3.12/site-packages/transformers/tokenization_utils_base.py", line 1108, in __getattr__
ERROR 03-31 07:57:21 [core.py:377]     raise AttributeError(f"{self.__class__.__name__} has no attribute {key}")
ERROR 03-31 07:57:21 [core.py:377] AttributeError: Qwen2TokenizerFast has no attribute image_token
ERROR 03-31 07:57:21 [core.py:377] 
CRITICAL 03-31 07:57:21 [core_client.py:343] Got fatal signal from worker processes, shutting down. See stack trace above for root cause issue.
[1]    180096 killed     python examples/offline_inference/qwen2_5_omni/only_thinker.py -q 

@fyabc
Contributor Author

fyabc commented Mar 31, 2025

Looks like this PR doesn't work with huggingface/transformers#36752 yet


I will take a look at it.
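The traceback above boils down to the processor reading `image_token` off a tokenizer class that does not define it. A minimal repro of that pattern, simplified and not the actual transformers code (the `"<image>"` default below is a placeholder, not the real Qwen2.5-Omni token string):

```python
class FastTokenizerStub:
    """Simplified stand-in for Qwen2TokenizerFast (not the real class):
    lookups of undefined attributes raise AttributeError, mirroring
    tokenization_utils_base.__getattr__ in the traceback above."""
    def __getattr__(self, key):
        raise AttributeError(f"{type(self).__name__} has no attribute {key}")

tok = FastTokenizerStub()

try:
    _ = tok.image_token          # mirrors the failing processor line
    failed = False
except AttributeError:
    failed = True

# Defensive pattern: fall back to a default when the attribute is missing.
# "<image>" is an illustrative placeholder value only.
image_token = getattr(tok, "image_token", "<image>")
```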

fyabc and others added 4 commits March 31, 2025 20:10
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: Roger Wang <[email protected]>
Member

@ywang96 ywang96 left a comment


Many thanks for making this contribution to vLLM!

I made a few fixes and code changes, and confirmed that this model's examples now work on both V1 and V0 (with use_audio_in_video supported on V0 only), so the only blocker is waiting for huggingface/transformers#36752 to be merged!

ywang96 added 2 commits March 31, 2025 23:06
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: Roger Wang <[email protected]>

mergify bot commented Apr 1, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @fyabc.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Apr 1, 2025
Comment on lines +218 to +224
self.qkv = MergedColumnParallelLinear(
    input_size=embed_dim,
    output_sizes=[projection_size] * 3,
    bias=True,
    quant_config=quant_config,
    prefix=f"{prefix}.qkv",
)
Member


After some investigation, we discovered that this change actually introduced a regression for Qwen2.5-VL inference, so I'm blocking this PR until we resolve the issue.

Contributor


I found that it works well when tp is 1, but the results are not quite right when tp > 1. I am currently investigating further.
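For context on why tp > 1 can go wrong with a merged QKV layer: each tensor-parallel rank must hold its slice of each of the Q, K and V projections, not a contiguous chunk of the fused weight. A toy numpy sketch of this failure mode; it is illustrative only, not vLLM's actual weight-loading code:

```python
import numpy as np

embed_dim, proj, tp = 4, 6, 2        # toy sizes; proj must divide by tp
rng = np.random.default_rng(0)
w = rng.standard_normal((3 * proj, embed_dim))  # fused rows: [Q; K; V]
x = rng.standard_normal(embed_dim)

q_ref, k_ref, v_ref = np.split(w @ x, 3)        # single-rank reference

# Correct sharding: rank r takes the r-th slice of *each* projection, so a
# local 3-way split of its output still yields matching q/k/v head groups.
q_w, k_w, v_w = np.split(w, 3)
shard = proj // tp
q_parts, k_parts, v_parts = [], [], []
for r in range(tp):
    rows = slice(r * shard, (r + 1) * shard)
    local_w = np.concatenate([q_w[rows], k_w[rows], v_w[rows]])
    lq, lk, lv = np.split(local_w @ x, 3)
    q_parts.append(lq); k_parts.append(lk); v_parts.append(lv)

assert np.allclose(np.concatenate(q_parts), q_ref)  # q reassembles exactly

# Naive sharding: contiguous chunks of the fused weight. Rank 0 then holds
# all of Q plus part of K, and its local 3-way split mixes projections:
naive_local = np.split(np.split(w, tp)[0] @ x, 3)
assert not np.allclose(naive_local[1], k_ref[:shard])  # the "k" slice is really q
```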

Labels
ci/build · documentation · frontend · multi-modality (#4194) · speculative-decoding · structured-output · v1
Projects
Status: In Progress
Development

Successfully merging this pull request may close these issues.

[New Model]: please support Qwen/Qwen2.5-Omni-7B
7 participants