Commit 297c2a0

prompt_caching.md: Fix wrong prompt_tokens definition (#16044)
1 parent f747a4a commit 297c2a0

File tree

1 file changed: +1 -1 lines changed

docs/my-website/docs/completion/prompt_caching.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -27,7 +27,7 @@ For the supported providers, LiteLLM follows the OpenAI prompt caching usage obj
 }
 ```
 
-- `prompt_tokens`: These are the non-cached prompt tokens (same as Anthropic, equivalent to Deepseek `prompt_cache_miss_tokens`).
+- `prompt_tokens`: These are all prompt tokens including cache-miss and cache-hit input tokens.
 - `completion_tokens`: These are the output tokens generated by the model.
 - `total_tokens`: Sum of prompt_tokens + completion_tokens.
 - `prompt_tokens_details`: Object containing cached_tokens.
````
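
For context, a minimal sketch (not part of this commit) of how the usage fields described in the updated docs can be inspected on a LiteLLM response. The model name is only an example, and whether `prompt_tokens_details` is populated depends on the provider:

```python
# Sketch: reading prompt-caching usage fields from a LiteLLM completion response.
# Assumes the relevant provider API key is configured in the environment.
import litellm

response = litellm.completion(
    model="anthropic/claude-3-5-sonnet-20240620",  # example model name (assumption)
    messages=[{"role": "user", "content": "Hello, how are you?"}],
)

usage = response.usage
print("prompt_tokens:", usage.prompt_tokens)          # all prompt tokens, cache hits + misses
print("completion_tokens:", usage.completion_tokens)  # output tokens generated by the model
print("total_tokens:", usage.total_tokens)            # prompt_tokens + completion_tokens

# Cached (cache-hit) prompt tokens, when the provider reports them.
details = getattr(usage, "prompt_tokens_details", None)
if details is not None:
    print("cached_tokens:", details.cached_tokens)
```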
