@JerryWu1234 JerryWu1234 commented Nov 7, 2025

what changed

Relates to #1204 and to the upstream ai SDK bug vercel/ai#7913.

  • Introduced a new `DeepseekAIClient` class for handling chat completions with the Deepseek API.
  • Updated `LLMProvider` to include support for the Deepseek model, allowing it to instantiate `DeepseekAIClient` based on the model name.
  • Modified the `AvailableModel` type to include "deepseek" as a valid model provider.

test plan

  • Ensure that the new client correctly handles chat completion requests and integrates seamlessly with existing LLM functionality.
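A minimal sketch of the provider change described above, assuming a hypothetical helper `pickProvider` (the real `LLMProvider` logic in Stagehand is more involved):

```typescript
// Hypothetical simplified shape of the provider mapping: given a model
// name, pick which provider's client to instantiate. Not the actual
// Stagehand code; the prefixes below are illustrative.
type ModelProvider = "openai" | "anthropic" | "deepseek";

function pickProvider(modelName: string): ModelProvider | undefined {
  // Per this PR, "deepseek" becomes a valid provider in the model mapping.
  if (modelName.startsWith("deepseek")) return "deepseek";
  if (modelName.startsWith("gpt")) return "openai";
  if (modelName.startsWith("claude")) return "anthropic";
  return undefined;
}

console.log(pickProvider("deepseek-chat")); // "deepseek"
```

With a mapping like this, `LLMProvider` can construct a `DeepseekAIClient` whenever the resolved provider is `"deepseek"`.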

changeset-bot bot commented Nov 7, 2025

🦋 Changeset detected

Latest commit: 1db71b9

The changes in this PR will be included in the next version bump.

This PR includes changesets to release 2 packages:

| Name | Type |
| --- | --- |
| @browserbasehq/stagehand | Patch |
| @browserbasehq/stagehand-evals | Patch |


@greptile-apps greptile-apps bot left a comment


Greptile Summary

Adds Deepseek API support by introducing a new `DeepseekAIClient` class that handles Deepseek's JSON format requirements. The implementation addresses the issue where the Deepseek API requires the word "json" to appear in the prompt when using `response_format: json_object`.

Key Changes:

  • New DeepseekAIClient extends LLMClient with custom prompt handling that explicitly includes "json" keyword for schema-based requests
  • Updated LLMProvider to instantiate DeepseekAIClient for deepseek models
  • Added "deepseek" to ModelProvider type and model mapping
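The "json" keyword requirement can be sketched as follows; `buildSchemaPrompt` is a hypothetical helper, not the PR's actual prompt-handling code:

```typescript
// Deepseek rejects response_format: { type: "json_object" } unless the
// prompt itself mentions "json", so a schema-based request appends an
// explicit JSON-format instruction to the user prompt.
function buildSchemaPrompt(userPrompt: string, schemaDescription: string): string {
  return [
    userPrompt,
    `Respond in JSON format matching this schema: ${schemaDescription}`,
  ].join("\n");
}

const prompt = buildSchemaPrompt("Extract the page title.", '{ "title": "string" }');
// The word "json" now appears in the prompt, satisfying the API requirement.
console.log(prompt.toLowerCase().includes("json")); // true
```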

Critical Issues Found:

  • Model routing logic has a bug: slash-prefixed models like deepseek/deepseek-chat will bypass DeepseekAIClient entirely and use AISdkClient instead (checked on line 147 before the switch statement)
  • Missing null safety on response.choices[0] access (line 242 in DeepseekAIClient.ts)
  • Confusing prompt instruction that says "response must include the word 'json'" instead of "respond in JSON format"
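The null-safety concern can be illustrated with a small sketch; the response shape below is simplified, and `extractContent` is a hypothetical stand-in for the client's parsing code:

```typescript
// If the API returns an empty choices array, indexing choices[0] directly
// and reading .message.content throws at runtime. Optional chaining
// short-circuits to undefined instead.
interface ChatResponse {
  choices?: { message?: { content?: string } }[];
}

function extractContent(response: ChatResponse): string | undefined {
  return response.choices?.[0]?.message?.content;
}

console.log(extractContent({ choices: [] })); // undefined
console.log(extractContent({ choices: [{ message: { content: "{}" } }] })); // "{}"
```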

Confidence Score: 2/5

  • This PR has a critical routing bug that prevents the new client from being used with slash-prefixed model names
  • Score reflects the critical logic error in LLMProvider routing that will cause slash-prefixed deepseek models to bypass the new DeepseekAIClient entirely, plus the null safety issue that could cause runtime errors
  • Pay close attention to packages/core/lib/v3/llm/LLMProvider.ts (routing logic bug) and packages/core/lib/v3/llm/DeepseekAIClient.ts (null safety issue)

Important Files Changed

File Analysis

| Filename | Score | Overview |
| --- | --- | --- |
| `packages/core/lib/v3/llm/DeepseekAIClient.ts` | 2/5 | New client for Deepseek API with custom JSON prompt handling to meet Deepseek's requirement. Has null safety issue on line 242 and confusing prompt instruction. |
| `packages/core/lib/v3/llm/LLMProvider.ts` | 1/5 | Added `DeepseekAIClient` integration, but slash-prefixed models (`deepseek/deepseek-chat`) won't use it due to routing logic on lines 147-163. |
| `packages/core/lib/v3/types/public/model.ts` | 5/5 | Added "deepseek" to `ModelProvider` type - straightforward type addition with no issues. |

Sequence Diagram

```mermaid
sequenceDiagram
    participant User
    participant Stagehand
    participant LLMProvider
    participant DeepseekAIClient
    participant DeepseekAPI

    User->>Stagehand: act/extract with model "deepseek-chat"
    Stagehand->>LLMProvider: getClient("deepseek-chat", options)

    alt model contains "/"
        LLMProvider->>LLMProvider: Route to AISdkClient
        Note over LLMProvider: Models like "deepseek/deepseek-chat"<br/>bypass DeepseekAIClient
    else model without "/"
        LLMProvider->>DeepseekAIClient: new DeepseekAIClient()
    end

    LLMProvider-->>Stagehand: client instance
    Stagehand->>DeepseekAIClient: createChatCompletion(options)

    alt with response_model (JSON schema)
        DeepseekAIClient->>DeepseekAIClient: Add schema + "json" keyword to prompt
        DeepseekAIClient->>DeepseekAIClient: Set responseFormat = "json_object"
    end

    DeepseekAIClient->>DeepseekAPI: POST /v1/chat/completions
    DeepseekAPI-->>DeepseekAIClient: response

    alt response_model validation
        DeepseekAIClient->>DeepseekAIClient: Parse JSON from response.choices[0].message.content
        DeepseekAIClient->>DeepseekAIClient: validateZodSchema()

        alt validation fails
            DeepseekAIClient->>DeepseekAIClient: Retry (up to 3 times)
        end
    end

    DeepseekAIClient-->>Stagehand: parsed data or full response
    Stagehand-->>User: result
```
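The parse/validate/retry loop in the diagram can be sketched like this; `callApi` and `isValid` stand in for the real Deepseek call and Zod validation, and the function name is hypothetical:

```typescript
// Retry a chat completion until the parsed response passes validation,
// up to a fixed number of attempts; rethrow the last failure otherwise.
async function completeWithRetry<T>(
  callApi: () => Promise<string>,
  isValid: (parsed: unknown) => parsed is T,
  maxAttempts = 3,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      const raw = await callApi();
      const parsed: unknown = JSON.parse(raw);
      if (isValid(parsed)) return parsed;
      lastError = new Error("schema validation failed");
    } catch (err) {
      lastError = err; // JSON.parse or network failure; try again
    }
  }
  throw lastError;
}
```

In the actual client the validator would be `validateZodSchema` against the caller's `response_model`, per the diagram above.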

Additional Comments (1)

  1. `packages/core/lib/v3/llm/LLMProvider.ts`, lines 147-163

    logic: Models like deepseek/deepseek-chat will be routed to AISdkClient instead of the new DeepseekAIClient because this check happens first. Only models without the slash prefix (e.g., deepseek-chat) will use DeepseekAIClient. This means the custom logic in DeepseekAIClient (like the special prompt handling for JSON format) won't be applied to slash-prefixed models.
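The ordering fix this comment implies can be sketched as follows; the client names are stand-ins, not the actual `LLMProvider` routing code:

```typescript
// Check provider-specific prefixes before the generic slash fallback, so
// "deepseek/deepseek-chat" reaches the Deepseek branch instead of being
// swallowed by the AI SDK routing.
function routeClient(modelName: string): string {
  if (modelName.startsWith("deepseek/") || modelName.startsWith("deepseek-")) {
    return "DeepseekAIClient";
  }
  if (modelName.includes("/")) {
    return "AISdkClient"; // generic provider/model routing
  }
  return "DefaultClient";
}

console.log(routeClient("deepseek/deepseek-chat")); // "DeepseekAIClient"
console.log(routeClient("openai/gpt-4o")); // "AISdkClient"
```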

3 files reviewed, 3 comments


…rove data extraction safety

# what changed
- Modified the instruction for JSON responses to clarify that the response must be in JSON format.
- Added optional chaining to safely access the message content in the response, preventing potential runtime errors.

# test plan
- Verify that the updated JSON response instructions are correctly interpreted by the client.
- Ensure that the optional chaining prevents errors when the message content is undefined.