fix: remove stream_usage from text completion #1285
base: develop
Conversation
Codecov Report

All modified and coverable lines are covered by tests ✅

Coverage Diff (develop → #1285):

|          | develop | #1285  | +/- |
|----------|---------|--------|-----|
| Coverage | 69.78%  | 69.79% |     |
| Files    | 161     | 161    |     |
| Lines    | 16057   | 16058  | +1  |
| Hits     | 11206   | 11207  | +1  |
| Misses   | 4851    | 4851   |     |
Pull Request Overview

This PR removes the unsupported `stream_usage` parameter from the kwargs before initializing text completion providers to prevent API errors.

- Strips out `stream_usage` so text completion calls don't receive an invalid argument.
- Ensures compatibility with providers like OpenAI's `AsyncCompletions`.
Comments suppressed due to low confidence (1)

`nemoguardrails/llm/models/langchain_initializer.py:258`

Add a unit test to verify that passing `stream_usage` into `_init_text_completion_model` no longer raises an unexpected keyword argument error, ensuring this change is covered.

```python
kwargs.pop("stream_usage", None)
```
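A test along the lines suggested could be sketched as follows. This is a self-contained illustration, not the project's actual test code: `init_text_completion_model` is a stand-in for the real `_init_text_completion_model`, and `DummyProvider` is a hypothetical provider class that, like OpenAI's `AsyncCompletions`, does not accept `stream_usage`.

```python
def init_text_completion_model(provider_cls, model_name, kwargs):
    # Stand-in for _init_text_completion_model with the fix applied:
    # drop the unsupported kwarg before constructing the provider.
    kwargs.pop("stream_usage", None)
    return provider_cls(model_name=model_name, **kwargs)


class DummyProvider:
    # Deliberately does not accept stream_usage, mirroring text
    # completion provider classes that reject unknown kwargs.
    def __init__(self, model_name):
        self.model_name = model_name


def test_stream_usage_is_stripped():
    # Without the pop, this would raise:
    # TypeError: __init__() got an unexpected keyword argument 'stream_usage'
    model = init_text_completion_model(
        DummyProvider, "davinci-002", {"stream_usage": True}
    )
    assert model.model_name == "davinci-002"
```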
```diff
@@ -253,6 +253,9 @@ def _init_text_completion_model(
     if provider_cls is None:
         raise ValueError()
     kwargs = _update_model_kwargs(provider_cls, model_name, kwargs)
+    # remove stream_usage parameter as it's not supported by text completion APIs
+    # (e.g., OpenAI's AsyncCompletions.create() doesn't accept this parameter)
+    kwargs.pop("stream_usage", None)
```
[nitpick] Consider extracting the "stream_usage" key into a shared constant or documenting it in the function docstring to avoid magic strings and make the removal rationale more discoverable.

Suggested change:

```diff
- kwargs.pop("stream_usage", None)
+ kwargs.pop(STREAM_USAGE_KEY, None)
```
This is a good fix for now, but we should try to find something more generic for unsupported attributes in kwargs for LangChain LLM providers. Let's merge this and talk in private about a generic fix.
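One shape such a generic fix could take is filtering kwargs against the provider's `__init__` signature. This is a hypothetical sketch of the idea, not code from the repository; `filter_supported_kwargs` and `TextCompletions` are illustrative names only.

```python
import inspect


def filter_supported_kwargs(provider_cls, kwargs):
    """Drop kwargs the provider's __init__ does not accept.

    Sketch of a generic unsupported-attribute filter. Classes whose
    __init__ takes **kwargs (common for pydantic-based LangChain
    models) pass everything through unchanged.
    """
    params = inspect.signature(provider_cls.__init__).parameters.values()
    if any(p.kind is inspect.Parameter.VAR_KEYWORD for p in params):
        return dict(kwargs)
    accepted = {p.name for p in params}
    return {k: v for k, v in kwargs.items() if k in accepted}


class TextCompletions:
    # Stand-in provider that, like text completion clients,
    # does not accept stream_usage.
    def __init__(self, model_name, temperature=0.0):
        self.model_name = model_name
        self.temperature = temperature


cleaned = filter_supported_kwargs(
    TextCompletions, {"temperature": 0.1, "stream_usage": True}
)
print(cleaned)  # {'temperature': 0.1}
```

A signature-based filter like this would cover `stream_usage` and any future unsupported attribute in one place, at the cost of silently dropping misspelled kwargs.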
This pull request includes a small but important change to the `_init_text_completion_model` function in `langchain_initializer.py`. The change ensures compatibility with text completion APIs by removing the unsupported `stream_usage` parameter from the `kwargs` dictionary before initializing the provider class.

TODO: ensure the current implementation for token usage tracking does not break for other providers; see #1264.