Great job! I noticed that Open R1 supports accelerated inference with vLLM, which can significantly improve performance, but I couldn't find any integration of this feature in this repository.
Is this an intentional omission, or have I overlooked some documentation or configuration option that enables vLLM support?