Model Parameters: Not all parameters apply to every model.
| Field | Type | Required | Description |
| --- | --- | --- | --- |
| temperature | Optional[float] | ➖ | Only supported on chat and completion models. |
| max_tokens | Optional[float] | ➖ | Only supported on chat and completion models. |
| top_k | Optional[float] | ➖ | Only supported on chat and completion models. |
| top_p | Optional[float] | ➖ | Only supported on chat and completion models. |
| frequency_penalty | Optional[float] | ➖ | Only supported on chat and completion models. |
| presence_penalty | Optional[float] | ➖ | Only supported on chat and completion models. |
| num_images | Optional[float] | ➖ | Only supported on image models. |
| seed | Optional[float] | ➖ | Best-effort deterministic seed for the model. Currently only OpenAI models support this. |
| format_ | Optional[models.UpdatePromptFormat] | ➖ | Only supported on image models. |
| dimensions | Optional[str] | ➖ | Only supported on image models. |
| quality | Optional[str] | ➖ | Only supported on image models. |
| style | Optional[str] | ➖ | Only supported on image models. |
| response_format | OptionalNullable[models.UpdatePromptResponseFormat] | ➖ | An object specifying the format that the model must output. Setting to { "type": "json_schema", "json_schema": {...} } enables Structured Outputs, which ensures the model will match your supplied JSON schema. Setting to { "type": "json_object" } enables JSON mode, which ensures the message the model generates is valid JSON. Important: when using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if finish_reason="length", which indicates the generation exceeded max_tokens or the conversation exceeded the max context length. |
| photo_real_version | Optional[models.UpdatePromptPhotoRealVersion] | ➖ | The version of photoReal to use. Must be v1 or v2. Only available for the leonardoai provider. |
| encoding_format | Optional[models.UpdatePromptEncodingFormat] | ➖ | The format in which to return the embeddings. |
| reasoning_effort | Optional[models.UpdatePromptReasoningEffort] | ➖ | Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response. |
| budget_tokens | Optional[float] | ➖ | Gives the model enhanced reasoning capabilities for complex tasks. A value of 0 disables thinking. The minimum thinking budget is 1024 tokens, and budget_tokens should never exceed max_tokens. Only supported by Anthropic. |
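The two response_format shapes described in the table can be sketched as plain dictionaries. The { "type": "json_schema", ... } and { "type": "json_object" } shapes come directly from the table; the schema contents and the message list are illustrative only, and the surrounding request call is omitted:

```python
# Structured Outputs: the model's output must match the supplied JSON schema.
# Schema name and fields below are hypothetical examples.
structured = {
    "type": "json_schema",
    "json_schema": {
        "name": "city_info",
        "schema": {
            "type": "object",
            "properties": {
                "city": {"type": "string"},
                "population": {"type": "integer"},
            },
            "required": ["city", "population"],
        },
    },
}

# JSON mode: the generated message is guaranteed to be valid JSON, but you
# must also ask for JSON in a system or user message, or the model may emit
# whitespace until it hits the token limit.
json_mode = {"type": "json_object"}

messages = [
    # This instruction is required when using json_mode (see the note above).
    {"role": "system", "content": "Reply only with a JSON object."},
    {"role": "user", "content": "Give me facts about Paris."},
]
```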
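The budget_tokens constraints (0 disables thinking; otherwise the value must be at least 1024 and must never exceed max_tokens) can be checked before sending a request. The validate_budget helper below is a hypothetical sketch, not part of the SDK:

```python
def validate_budget(budget_tokens: int, max_tokens: int) -> None:
    """Enforce the budget_tokens rules from the table (hypothetical helper).

    A value of 0 disables thinking; any other value must be at least 1024
    and must not exceed max_tokens.
    """
    if budget_tokens == 0:
        return  # thinking disabled
    if budget_tokens < 1024:
        raise ValueError("budget_tokens must be 0 or at least 1024")
    if budget_tokens > max_tokens:
        raise ValueError("budget_tokens must not exceed max_tokens")
```

Failing fast like this surfaces an invalid thinking budget locally instead of as a provider-side error.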