[BUG] AzureOpenAIServerModel Sends Unsupported 'stop' Parameter for o1-mini #554
We can implement a workaround by subclassing `AzureOpenAIServerModel`:

```python
import os

from dotenv import load_dotenv

from smolagents import CodeAgent
from smolagents.models import AzureOpenAIServerModel

load_dotenv(override=True)


class PatchedAzureOpenAIServerModel(AzureOpenAIServerModel):
    def _prepare_completion_kwargs(self, *args, **kwargs):
        completion_kwargs = super()._prepare_completion_kwargs(*args, **kwargs)
        # Remove the 'stop' parameter if it exists
        completion_kwargs.pop("stop", None)
        return completion_kwargs


if __name__ == "__main__":
    model = PatchedAzureOpenAIServerModel(
        model_id="o1-mini",
        api_key=os.environ.get("AZURE_OPENAI_API_KEY"),
        api_version=os.environ.get("AZURE_OPENAI_API_VERSION"),
        azure_endpoint=os.environ.get("AZURE_OPENAI_API_BASE"),
        custom_role_conversions={
            "system": "assistant",
            "tool-call": "assistant",
            "tool-response": "user",
        },
    )
    agent = CodeAgent(tools=[], model=model, add_base_tools=True)
    agent.run("Could you give me the 118th number in the Fibonacci sequence?")
```
Fyi, looking at the code, it seems like the model's …
Describe the bug
When using the AzureOpenAIServerModel with the o1-mini deployment, the request being sent includes a "stop" parameter, which is not supported by the model. This results in a 400 error with the message:
“Unsupported parameter: 'stop' is not supported with this model.”
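For illustration only (the payload below is an assumption, not taken from the report's logs), the failing request carries a top-level `"stop"` field alongside the usual chat-completion parameters, and it is this field that o1-mini rejects:

```python
import json

# Hypothetical request body; the stop value is a placeholder, not the
# actual sequence smolagents sends. o1-mini rejects the "stop" field
# with HTTP 400 regardless of its contents.
request_body = {
    "model": "o1-mini",
    "messages": [{"role": "user", "content": "Hello"}],
    "stop": ["<placeholder-stop-sequence>"],
}
print(json.dumps(request_body, indent=2))
```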
Code to reproduce the error
Error logs (if any)
Expected behavior
The request to Azure OpenAI should succeed without including a "stop" parameter when using the o1-mini model. The API call should complete successfully and return a valid response without triggering a 400 error.
Packages version:
smolagents==1.8.0
Additional context
The error appears to be caused by the internal method `_prepare_completion_kwargs` in the `AzureOpenAIServerModel` class, which adds a "stop" parameter to the API request. A possible workaround is to override this method to remove the "stop" parameter if present, as in the patch posted in the first comment.
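The override pattern can be exercised without Azure credentials by stubbing the base class. Note that `StubServerModel` and the kwargs shape below are hypothetical stand-ins for illustration, not the real smolagents internals:

```python
class StubServerModel:
    """Hypothetical stand-in for AzureOpenAIServerModel."""

    def _prepare_completion_kwargs(self, *args, **kwargs):
        # The real base class injects the agent's stop sequences here;
        # this stub just mimics that a "stop" key ends up in the kwargs.
        return {
            "model": "o1-mini",
            "messages": kwargs.get("messages", []),
            "stop": ["<placeholder-stop-sequence>"],
        }


class PatchedServerModel(StubServerModel):
    def _prepare_completion_kwargs(self, *args, **kwargs):
        completion_kwargs = super()._prepare_completion_kwargs(*args, **kwargs)
        # Drop "stop" so models that reject the parameter don't return 400.
        completion_kwargs.pop("stop", None)
        return completion_kwargs


kwargs = PatchedServerModel()._prepare_completion_kwargs(
    messages=[{"role": "user", "content": "hi"}]
)
print("stop" in kwargs)  # False
```

The same subclass-and-strip approach generalizes to any other parameter a particular deployment refuses.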