Replies: 1 comment, 2 replies
- @aswad546 did you find a way to do this? I'm looking for the exact same thing. Thanks!
Hello all,
I am using a fine-tuned version of Qwen2-VL for a multimodal LLM application, and I need to make many queries to the model at the same time. I was wondering whether vLLM provides dynamic batching for multimodal inputs out of the box, or whether there is something special I have to do to ensure it happens. If someone could point me in the right direction, I would be very grateful.
Thank you!
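For what it's worth, my understanding is that vLLM applies its continuous ("dynamic") batching to every request the engine schedules, multimodal requests included, so the usual pattern is simply to send many requests concurrently and let the scheduler batch them on the server side. Below is a minimal sketch against the OpenAI-compatible server; the checkpoint path, served model name, and image URLs are placeholders, and it assumes the fine-tuned Qwen2-VL checkpoint has already been launched with something like `vllm serve /path/to/finetuned-qwen2-vl`.

```python
import asyncio

from openai import AsyncOpenAI

# Points at a locally running vLLM OpenAI-compatible server
# (e.g. started with: vllm serve /path/to/finetuned-qwen2-vl).
client = AsyncOpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# Placeholder inputs -- replace with your own images and questions.
IMAGE_URLS = [
    "https://example.com/image_1.jpg",
    "https://example.com/image_2.jpg",
    "https://example.com/image_3.jpg",
]


async def query_one(image_url: str, question: str) -> str:
    """Send a single multimodal chat request."""
    resp = await client.chat.completions.create(
        # Use whatever model name the server registers (the --model path
        # unless --served-model-name was set).
        model="/path/to/finetuned-qwen2-vl",
        messages=[
            {
                "role": "user",
                "content": [
                    {"type": "image_url", "image_url": {"url": image_url}},
                    {"type": "text", "text": question},
                ],
            }
        ],
        max_tokens=128,
    )
    return resp.choices[0].message.content


async def main() -> None:
    # Fire all requests concurrently; vLLM's scheduler batches the
    # in-flight requests on the server side.
    tasks = [query_one(url, "Describe this image.") for url in IMAGE_URLS]
    for answer in await asyncio.gather(*tasks):
        print(answer)


if __name__ == "__main__":
    asyncio.run(main())
```

If you are not serving over HTTP, passing a list of prompts (each with its own `multi_modal_data`) to the offline `LLM.generate()` call should have the same effect, since the engine schedules and batches those requests internally as well.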