[Frontend] Add request_id to the Request object so they can be controlled better via external load balancers #21009
Conversation
Signed-off-by: Kourosh Hakhamaneshi <[email protected]>
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default; only a reduced set of checks runs automatically. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add 🚀
Code Review
This pull request introduces a request_id field to the CompletionRequest and EmbeddingRequest objects, allowing external systems to control request identifiers. The implementation correctly integrates this new field into the request ID generation logic. My main feedback is to address the code duplication of the request_id field definition across multiple classes in vllm/entrypoints/openai/protocol.py to improve long-term maintainability.
request_id: str = Field(
    default_factory=lambda: f"{random_uuid()}",
    description=(
        "The request_id related to this request. If the caller does "
        "not set it, a random_uuid will be generated. This id is used "
        "throughout the inference process and returned in the response."),
)
The definition for the request_id field is duplicated across CompletionRequest, EmbeddingCompletionRequest, and EmbeddingChatRequest in this PR. A similar definition also exists in ChatCompletionRequest. This duplication can lead to maintenance issues in the future (e.g., if the description or default factory needs to be updated, it must be changed in multiple places).
To improve maintainability and adhere to the DRY (Don't Repeat Yourself) principle, I suggest defining the Field object once and reusing it in all relevant request classes.
For example, you could define a shared field at the module level:
# At an appropriate module-level scope
_REQUEST_ID_FIELD = Field(
    default_factory=lambda: f"{random_uuid()}",
    description=(
        "The request_id related to this request. If the caller does "
        "not set it, a random_uuid will be generated. This id is used "
        "throughout the inference process and returned in the response."),
)
# In each request class
class CompletionRequest(OpenAIBaseModel):
# ...
request_id: str = _REQUEST_ID_FIELD
# ...
This change should be applied to EmbeddingCompletionRequest and EmbeddingChatRequest as well.
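To illustrate the shared default-factory pattern the review suggests, here is a minimal, stdlib-only sketch. It uses dataclasses as a stand-in for the pydantic Field definitions in vllm/entrypoints/openai/protocol.py (the class names mirror the PR; the helper names here are illustrative, not vLLM's actual code):

```python
import uuid
from dataclasses import dataclass, field


def random_uuid() -> str:
    # Stand-in for vLLM's random_uuid helper.
    return str(uuid.uuid4())


# Defined once at module level (DRY), analogous to the suggested
# _REQUEST_ID_FIELD, and reused by every request class below.
def _request_id_factory() -> str:
    return random_uuid()


@dataclass
class CompletionRequest:
    prompt: str
    # Caller-supplied ids are preserved; otherwise a fresh UUID is generated.
    request_id: str = field(default_factory=_request_id_factory)


@dataclass
class EmbeddingCompletionRequest:
    input: str
    request_id: str = field(default_factory=_request_id_factory)


# A load balancer can pin its own id; a plain client gets a random one.
a = CompletionRequest(prompt="hi", request_id="lb-node-7-0001")
b = EmbeddingCompletionRequest(input="hi")
```

With this shape, changing the id format or description only requires touching the single shared definition.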
Essential Elements of an Effective PR Description Checklist
Update supported_models.md and examples for a new model.
Purpose
This PR adds a usability feature to the protocol so that external load balancers, or systems like prefill disaggregation, can better control the request id based on their internal design decisions. This capability already exists for ChatRequests but not for Embedding and Completion requests.
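As a hedged sketch of the use case described above, an external load balancer could encode routing information in the ids it assigns, then pass them via the new request_id field in the request body. The endpoint path, model name, and id scheme below are illustrative assumptions, not part of the PR:

```python
import uuid


def build_completion_payload(prompt: str, shard: int) -> dict:
    # The load balancer controls the id so it can correlate this request
    # across backend instances (e.g. prefill and decode phases in a
    # prefill-disaggregated deployment).
    request_id = f"lb-shard{shard}-{uuid.uuid4().hex[:8]}"
    return {
        "model": "my-model",       # assumed model name
        "prompt": prompt,
        "request_id": request_id,  # field added by this PR
    }


# Body that would be POSTed to the (assumed) /v1/completions endpoint.
payload = build_completion_payload("Hello", shard=3)
```

Because the server echoes the id back in the response, the balancer can match responses to the requests it dispatched without maintaining a side channel.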
Test Plan
Test Result
(Optional) Documentation Update