Hey there, @Zamoca42! 👋 I'm here to help you with any bugs, questions, or contributions you have in mind. Let's tackle this together!
There is a similar closed issue regarding the tokenUsage field being unavailable when streamRunnable is set to true in AgentExecutor. The suggested solution was to ensure that the model configuration includes the necessary settings to track token usage, specifically setting the streamUsage option to true. However, it was noted that even with this configuration, output.llmOutput?.tokenUsage remained undefined when streamRunnable was true, but token usage details appeared in output.generations[1].
Additionally, the pull request Feat/streaming token usage addressed token usage reporting in streaming mode by implementing callback handlers that capture token usage data. This ensures that token usage is accurately reported in both streaming and non-streaming modes [2].
In your case, it seems that the usageMetadata is not being correctly accumulated during the streaming process. The logic for accumulating the token usage across multiple chunks might not be correctly implemented, leading to an empty estimatedTokenUsage object [3]. You might want to verify if the streamUsage option is correctly set and if the callback handlers are properly capturing the token usage data.
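The accumulation described above can be sketched as follows. This is an illustrative, self-contained helper, not the library's actual implementation; the field names mirror LangChain's conventions (`input_tokens`/`output_tokens`/`total_tokens` on chunk `usageMetadata`, `promptTokens`/`completionTokens`/`totalTokens` on `tokenUsage`), but `addUsage` itself is hypothetical:

```typescript
// Illustrative sketch: folding per-chunk usageMetadata into a running
// tokenUsage total. Field names mirror LangChain's conventions, but the
// helper itself is hypothetical, not the library's code.

interface UsageMetadata {
  input_tokens: number;
  output_tokens: number;
  total_tokens: number;
}

interface TokenUsage {
  promptTokens?: number;
  completionTokens?: number;
  totalTokens?: number;
}

function addUsage(total: TokenUsage, chunkUsage?: UsageMetadata): TokenUsage {
  // Chunks without usage metadata leave the running total unchanged.
  if (!chunkUsage) return total;
  return {
    promptTokens: (total.promptTokens ?? 0) + chunkUsage.input_tokens,
    completionTokens: (total.completionTokens ?? 0) + chunkUsage.output_tokens,
    totalTokens: (total.totalTokens ?? 0) + chunkUsage.total_tokens,
  };
}

// Many providers attach usage metadata only to the final chunk.
const chunks: { usageMetadata?: UsageMetadata }[] = [
  {},
  {},
  { usageMetadata: { input_tokens: 5, output_tokens: 12, total_tokens: 17 } },
];

let tokenUsage: TokenUsage = {};
for (const chunk of chunks) {
  tokenUsage = addUsage(tokenUsage, chunk.usageMetadata);
}

console.log(tokenUsage);
// { promptTokens: 5, completionTokens: 12, totalTokens: 17 }
```

If `_generate` collects chunks in streaming mode without a step like this, the `tokenUsage` object stays `{}` even though the chunks carried usage data.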
Checked other resources
Example Code
Steps to Reproduce
Register a callback for `handleLLMEnd` that includes a `console.log` to inspect the `llmOutput` field. The `estimatedTokenUsage` object in the `llmOutput` field is empty (`{}`).

Example code to reproduce:
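The example code itself did not survive in this copy of the issue. As a stand-in, here is a self-contained sketch of the reported behavior, with a stubbed streaming path in place of `ChatGoogleGenerativeAI` (the real class needs an API key to run); all names and shapes below are illustrative:

```typescript
// Self-contained stand-in for the reproduction: a stubbed streaming
// generate path in place of ChatGoogleGenerativeAI. Mirroring the reported
// bug, per-chunk usageMetadata is never folded into estimatedTokenUsage,
// so handleLLMEnd sees an empty object.

interface Chunk {
  text: string;
  usageMetadata?: { input_tokens: number; output_tokens: number; total_tokens: number };
}

interface LLMResult {
  generations: Chunk[][];
  llmOutput: { estimatedTokenUsage: Record<string, number> };
}

// Callback in the shape of LangChain's handleLLMEnd.
const callbacks = {
  handleLLMEnd(output: LLMResult) {
    console.log(output.llmOutput);
  },
};

function generateStreaming(chunks: Chunk[]): LLMResult {
  const collected: Chunk[] = [];
  const estimatedTokenUsage: Record<string, number> = {};
  for (const chunk of chunks) {
    collected.push(chunk);
    // Reported bug: chunk.usageMetadata is dropped here instead of being
    // accumulated into estimatedTokenUsage.
  }
  const result: LLMResult = {
    generations: [collected],
    llmOutput: { estimatedTokenUsage },
  };
  callbacks.handleLLMEnd(result);
  return result;
}

generateStreaming([
  { text: "Hello" },
  { text: "!", usageMetadata: { input_tokens: 3, output_tokens: 2, total_tokens: 5 } },
]);
// logs: { estimatedTokenUsage: {} }
```

The real reproduction would attach the same `handleLLMEnd` callback to a `ChatGoogleGenerativeAI` instance and consume its `.stream()` output; the logged `llmOutput` matches the error output shown below.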
Error Message and Stack Trace (if applicable)
{
generations: [ [ [ChatGenerationChunk] ] ],
llmOutput: { estimatedTokenUsage: {} }
}
Description
When using the ChatGoogleGenerativeAI class in streaming mode, the tokenUsage object is always returned as an empty object, even though the usage metadata is expected to be updated during the streaming process.
From analyzing the code, it appears that tokenUsage is not being updated within the _generate method in streaming mode. The chunk data from _streamResponseChunks includes usageMetadata, but it is not being added to the tokenUsage object.
System Info
node v20.18.1