[Bug]: qwen2.5-omni model failed to start #15864
Comments
You need to install the transformers PR branch from source, since it hasn't been merged into the main branch of transformers yet.
I installed it from the official repo at that commit:

```
pip install git+https://github.com/huggingface/transformers@f742a644ca32e65758c3adb36225aef1731bd2a8
```
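For anyone following along, a quick way to confirm which transformers build actually ended up installed (a minimal sketch; the repo id `Qwen/Qwen2.5-Omni-7B` is an assumption based on the model name in this issue, and the check needs network access):

```
# Confirm which transformers build is actually installed.
pip show transformers

# Optional sanity check: a branch with the Qwen2.5-Omni integration
# should be able to resolve the model's config via AutoConfig.
python -c "from transformers import AutoConfig; print(AutoConfig.from_pretrained('Qwen/Qwen2.5-Omni-7B').model_type)"
```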
Can you check out the discussion in #15754 (comment)? See if it solves your issue.
Still no good, the startup still failed:

```
pip install git+https://github.com/BakerBunker/transformers.git@qwen25omni
export CUDA_VISIBLE_DEVICES=1,2 && export VLLM_USE_V1=0 && vllm serve Qwen2.5-Omni-7B
```
Please note that you also need to install the PR branch for vLLM: #15130
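For reference, one way to build vLLM from an unmerged PR is to fetch the PR head directly from GitHub (a sketch using the PR number from the comment above; the local branch name is arbitrary, and a from-source vLLM build can take a while):

```
git clone https://github.com/vllm-project/vllm.git
cd vllm
# Fetch the head of PR #15130 into a local branch and check it out.
git fetch origin pull/15130/head:qwen25omni-pr
git checkout qwen25omni-pr
# Build and install vLLM from source in editable mode.
pip install -e .
```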
Still no good, the startup still failed.
Hi @hackerHiJu, you need to pull the latest model & config from the HuggingFace repo. We have updated tokenizer_config.json and other configs. You can also check this issue.
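A sketch of how to refresh a local copy of the model so the updated configs are picked up (assuming the repo id `Qwen/Qwen2.5-Omni-7B` and the `huggingface-cli` tool from huggingface_hub; adjust the local path to wherever the weights live):

```
# Re-download the model repo, overwriting stale config files such as
# tokenizer_config.json with the updated versions from the Hub.
huggingface-cli download Qwen/Qwen2.5-Omni-7B --local-dir ./Qwen2.5-Omni-7B
```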
Your current environment

vllm: 0.8.2
transformers: 4.51.0.dev0

```
export VLLM_USE_V1=0 && vllm serve Qwen2.5-Omni-7B --dtype half --cpu-offload-gb 1 --gpu-memory-utilization 0.9 --host 0.0.0.0 --port 9000
```
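Once the server does start, a quick way to verify it is serving the model is to hit vLLM's OpenAI-compatible endpoints (a sketch assuming port 9000 from the command above and a text-only prompt; multimodal audio/image requests need additional message content fields):

```
# List the models the server exposes.
curl http://localhost:9000/v1/models

# Send a minimal chat completion request.
curl http://localhost:9000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "Qwen2.5-Omni-7B", "messages": [{"role": "user", "content": "Hello"}]}'
```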
🐛 Describe the bug
The Qwen2.5-Omni model fails to start with the command above.