### 🔍 Overview
This update makes a few enhancements to the LLM configuration screen. It renames the UI label for the prompt token limit to "Context window", since the previous name could confuse users, and adds a new optional field, "Max output tokens".
#### `config/locales/client.en.yml` (+4 −2)
```diff
@@ -397,7 +397,8 @@ en:
       name: "Model id"
       provider: "Provider"
       tokenizer: "Tokenizer"
-      max_prompt_tokens: "Number of tokens for the prompt"
+      max_prompt_tokens: "Context window"
+      max_output_tokens: "Max output tokens"
       url: "URL of the service hosting the model"
       api_key: "API Key of the service hosting the model"
       enabled_chat_bot: "Allow AI bot selector"
@@ -480,7 +481,8 @@ en:
       failure: "Trying to contact the model returned this error: %{error}"

       hints:
-        max_prompt_tokens: "Max numbers of tokens for the prompt. As a rule of thumb, this should be 50% of the model's context window."
+        max_prompt_tokens: "The maximum number of tokens the model can process in a single request"
+        max_output_tokens: "The maximum number of tokens the model can generate in a single request"
         display_name: "The name used to reference this model across your site's interface."
         name: "We include this in the API call to specify which model we'll use"
         vision_enabled: "If enabled, the AI will attempt to understand images. It depends on the model being used supporting vision. Supported by latest models from Anthropic, Google, and OpenAI."
```
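To make the distinction between the two settings concrete, here is a minimal sketch, not code from this PR, of how the two limits typically relate in LLM APIs: the context window bounds everything the model processes in one request, while the max output tokens cap only what it generates. The interface and function names below are hypothetical.

```typescript
// Hypothetical shape mirroring the two settings on the configuration screen.
interface LlmLimits {
  maxPromptTokens: number; // shown in the UI as "Context window"
  maxOutputTokens?: number; // new optional field, "Max output tokens"
}

// In many LLM APIs, prompt tokens plus generated tokens must fit within the
// model's context window, which is why the renamed label is clearer than
// "Number of tokens for the prompt".
function fitsContextWindow(promptTokens: number, limits: LlmLimits): boolean {
  const reservedForOutput = limits.maxOutputTokens ?? 0;
  return promptTokens + reservedForOutput <= limits.maxPromptTokens;
}
```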