
Conversation


@kakj-go kakj-go commented Oct 17, 2025

1. Intercepts the `Write` method of the `ResponseWriter` for the endpoints below, so the model's response can be captured and run through sensitive-word filtering (a sketch of the wrapper appears after this list):

```go
func IsChatCompletionEndpoint(c *gin.Context) bool {
	return (c.Request.URL.Path == "/v1/chat/completions" && c.Request.Method == "POST") ||
		(c.Request.URL.Path == "/v1/responses" && c.Request.Method == "POST") ||
		(c.Request.URL.Path == "/v1/messages" && c.Request.Method == "POST") ||
		(c.Request.URL.Path == "/pg/chat/completions" && c.Request.Method == "POST") ||
		(strings.HasPrefix(c.Request.URL.Path, "/v1beta/models") && c.Request.Method == "POST")
}
```

2. Adds a frontend toggle for response sensitive-word filtering. It defaults to false, so users who have not enabled it are unaffected. The toggle's config name reuses the one from the commented-out code in the earlier commit "feat: 敏感词过滤".
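
A minimal sketch of the Write-interception idea from item 1, assuming a gin-style wrapper; the buffering field is illustrative and not necessarily how the PR stores state:

```go
package service

import (
	"bytes"

	"github.com/gin-gonic/gin"
)

// SensitiveWordFilterResponseWriter wraps gin's ResponseWriter so every byte
// the relay writes can be inspected for sensitive words.
type SensitiveWordFilterResponseWriter struct {
	gin.ResponseWriter
	buf bytes.Buffer // copy of the bytes written downstream (illustrative)
}

// Write records each chunk for later checking, then forwards it unchanged.
func (w *SensitiveWordFilterResponseWriter) Write(p []byte) (int, error) {
	w.buf.Write(p)
	return w.ResponseWriter.Write(p)
}
```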

Summary by CodeRabbit

Release Notes

  • New Features

    • Added sensitive word filtering for AI-generated chat responses. When enabled, the system detects and blocks responses containing sensitive content before delivery to the user.
  • Configuration

    • New setting toggle added to enable or disable sensitive word checking for chat completion responses.


coderabbitai bot commented Oct 17, 2025

Walkthrough

Introduces sensitive word filtering for LLM chat completion responses by adding a configuration flag, implementing a response writer to intercept and check content, detecting violations in both streamed and non-streamed formats, and returning appropriate errors when sensitive words are found.

Changes

Cohort / File(s): Summary

• Configuration and Settings (setting/sensitive.go, model/option.go): Adds the CheckSensitiveOnCompletionEnabled flag to settings and integrates it into the option initialization and update flows. Introduces the ShouldCheckCompletionSensitive() utility function. A sketch of this wiring follows the list.
• Frontend UI (web/src/components/settings/OperationSetting.jsx, web/src/pages/Setting/Operation/SettingsSensitiveWords.jsx): Adds the new boolean field CheckSensitiveOnCompletionEnabled to component state and renders a toggle switch in the sensitive-words settings form.
• Core Sensitive Word Filtering (service/sensitive.go): Implements the response-filtering layer: the SensitiveWordFilterResponseWriter type, streaming and non-streaming detection, multi-format error generation (SSE, JSON), utility functions for parsing and appending stream content, and the IsChatCompletionEndpoint() routing helper. Exports constants for placeholders and error labels.
• Request Processing (controller/relay.go): Conditionally wraps the response writer when completion-sensitive checks are enabled on chat completion endpoints, intercepts downstream errors, normalizes error messages with request IDs, and checks the writer for detected sensitive-word violations before returning to the client.
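
A minimal sketch of the flag and helper from the Configuration cohort, assuming the existing CheckSensitiveEnabled master switch that this review later says the helper composes with; the wiring is illustrative:

```go
package setting

// CheckSensitiveOnCompletionEnabled gates filtering of model responses;
// it defaults to false so existing deployments are unaffected.
var CheckSensitiveOnCompletionEnabled = false

// CheckSensitiveEnabled is the pre-existing master switch (name taken from
// this review's comments; its default here is an assumption).
var CheckSensitiveEnabled = true

// ShouldCheckCompletionSensitive reports whether completion responses should
// be scanned: both the master switch and the new flag must be on.
func ShouldCheckCompletionSensitive() bool {
	return CheckSensitiveEnabled && CheckSensitiveOnCompletionEnabled
}
```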

Sequence Diagram

```mermaid
sequenceDiagram
    participant Client
    participant Relay as controller/relay.go
    participant Writer as SensitiveWordFilterResponseWriter
    participant LLM as Downstream API
    participant Filter as service/sensitive

    Client->>Relay: POST /chat/completions
    alt CheckSensitiveOnCompletionEnabled && IsChatCompletionEndpoint
        Relay->>Writer: Create SensitiveWordFilterResponseWriter
    end
    Relay->>LLM: Forward request
    LLM-->>Writer: Response (streamed or buffered)
    Writer->>Filter: Intercept & check for sensitive words
    alt Sensitive Words Detected
        Writer->>Filter: Create NewAPIError (ErrorCodeSensitiveWordsDetected)
        Filter-->>Writer: Error with end-user message
        Writer-->>Client: Error response (SSE or JSON format)
    else No Violations
        Writer-->>Client: Original response
    end
```
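A sketch of the diagram's alt branch on the controller side, matching the gating described later in this review; the constructor name is an assumption, not the PR's actual API:

```go
// Illustrative fragment of controller/relay.go: wrap gin's writer only when
// both the global setting and the endpoint check allow it.
var writer *service.SensitiveWordFilterResponseWriter
if setting.ShouldCheckCompletionSensitive() && service.IsChatCompletionEndpoint(c) {
	writer = service.NewSensitiveWordFilterResponseWriter(c, relayInfo) // hypothetical constructor
	c.Writer = writer // downstream handlers now write through the filter
}
```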

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

The changes introduce a new filtering subsystem spanning multiple layers (configuration, service, controller, UI) with non-trivial logic for multi-format response handling, streaming content assembly, and error normalization. The service/sensitive.go module in particular adds several new exported types and methods requiring careful review of the detection and formatting logic.

Poem

🐰 A whisper of caution flows through the stream,
Filtering words that dare disrupt the dream,
When responses arrive, we peek and we peek,
Catching the shadows that some dare not speak—
With errors replaced and errors reborn,
Your chat remains safe, your users forewarned!

Pre-merge checks

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 11.11%, below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve coverage.
✅ Passed checks (2 passed)
  • Description Check ✅ Passed: Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed: The PR title "针对 chat 的 4 类接口和演练场接口回复的敏感词过滤" (sensitive-word filtering for responses from the four chat endpoint types and the playground endpoint) is directly related to the main change in this changeset. The modifications across all files (response-writer handling in controller/relay.go, filtering logic in service/sensitive.go, configuration handling in model/option.go and setting/sensitive.go, and UI controls in the web components) collectively implement exactly what the title describes: sensitive-word filtering applied to responses from the specified chat completion endpoints. The title accurately captures the primary purpose without unnecessary verbosity or vague language.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

🧹 Nitpick comments (3)
controller/relay.go (1)

199-203: Avoid double-response side effects when the writer has already handled the error

If the writer detected a sensitive response, it has already written the SSE/JSON error. Setting newAPIError then lets the defer attempt a second write (which is later swallowed). Consider short-circuiting after assigning newAPIError to skip further response writes, keeping the control flow clearer and the logs less confusing, e.g.:
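
A sketch of the short-circuit, using the names from the hunks quoted below:

```go
// If the filter writer already emitted the error body, record it and stop;
// do not let the deferred error path attempt a second write.
if writer != nil && writer.GetNewAPIError() != nil {
	newAPIError = writer.GetNewAPIError()
	return // response already sent by the writer
}
```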

service/sensitive.go (2)

4-17: Import alias collision reduces clarity

Aliasing common as marshalCommon while importing relay/common as common is confusing. Rename the relay/common import to relaycommon to avoid the shadowing, e.g.:
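
For instance (the module path here is a placeholder, not the repository's real one):

```go
import (
	marshalCommon "github.com/example/project/common" // JSON helpers and utils
	relaycommon "github.com/example/project/relay/common" // RelayInfo et al.
)
```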


122-165: SSE path: flush after writing the error, and remove unused fields

  • After writing the SSE error, call Flush() if the underlying writer supports it, so the error reaches the client immediately (sketch below).
  • SensitiveWordFilterResponseWriter.Body is unused; drop it to avoid the allocation.
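
A sketch of the flush, assuming the embedded writer exposes http.Flusher (gin's ResponseWriter does):

```go
// Push the just-written SSE error to the client without waiting for more data.
if flusher, ok := w.ResponseWriter.(http.Flusher); ok {
	flusher.Flush()
}
```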
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 43f2a8a and 98e77c2.

📒 Files selected for processing (6)
  • controller/relay.go (3 hunks)
  • model/option.go (2 hunks)
  • service/sensitive.go (2 hunks)
  • setting/sensitive.go (2 hunks)
  • web/src/components/settings/OperationSetting.jsx (1 hunks)
  • web/src/pages/Setting/Operation/SettingsSensitiveWords.jsx (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (4)
model/option.go (2)
common/constants.go (1)
  • OptionMap (37-37)
setting/sensitive.go (1)
  • CheckSensitiveOnCompletionEnabled (8-8)
web/src/pages/Setting/Operation/SettingsSensitiveWords.jsx (1)
web/src/components/settings/OperationSetting.jsx (1)
  • inputs (32-75)
controller/relay.go (3)
service/sensitive.go (2)
  • SensitiveWordFilterResponseWriter (100-112)
  • IsChatCompletionEndpoint (92-98)
common/utils.go (1)
  • MessageWithRequestId (289-291)
setting/sensitive.go (1)
  • ShouldCheckCompletionSensitive (41-43)
service/sensitive.go (10)
dto/request_common.go (1)
  • Request (8-12)
relay/common/relay_info.go (1)
  • RelayInfo (75-122)
types/error.go (5)
  • NewAPIError (87-95)
  • NewOpenAIError (226-249)
  • ErrorCodeBadResponseBody (72-72)
  • NewError (204-224)
  • ErrorCodeSensitiveWordsDetected (40-40)
types/relay_format.go (5)
  • RelayFormat (3-3)
  • RelayFormatClaude (7-7)
  • RelayFormatOpenAI (6-6)
  • RelayFormatGemini (8-8)
  • RelayFormatOpenAIResponses (9-9)
common/json.go (3)
  • Marshal (21-23)
  • Unmarshal (9-11)
  • UnmarshalJsonStr (13-15)
dto/openai_response.go (4)
  • OpenAITextResponse (39-47)
  • OpenAIResponsesResponse (261-284)
  • ChatCompletionsStreamResponse (141-149)
  • ResponsesStreamResponse (362-367)
logger/logger.go (2)
  • LogError (65-67)
  • LogWarn (61-63)
dto/openai_request.go (1)
  • Message (278-289)
dto/gemini.go (1)
  • GeminiChatResponse (301-305)
dto/claude.go (1)
  • ClaudeResponse (447-461)
🔇 Additional comments (5)
model/option.go (1)

138-138: Flag wiring looks correct

Option map init and update correctly propagate CheckSensitiveOnCompletionEnabled.

Also applies to: 282-283

web/src/components/settings/OperationSetting.jsx (1)

61-61: LGTM

Defaulting CheckSensitiveOnCompletionEnabled to false matches backend gating.

setting/sensitive.go (1)

8-8: LGTM

Flag and helper compose correctly with existing CheckSensitiveEnabled.

Also applies to: 41-43

web/src/pages/Setting/Operation/SettingsSensitiveWords.jsx (1)

37-37: LGTM

New toggle is wired consistently with existing settings form behavior.

Also applies to: 129-143

controller/relay.go (1)

131-141: Writer gating looks good, but ensure prompt precheck builds a non-nil error with 4xx

Gating on ShouldCheckCompletionSensitive() and IsChatCompletionEndpoint is correct. Separately, the earlier prompt check constructs newAPIError via types.NewError(err, ...) where err is nil, yielding an empty message and a 500 status. Use a concrete error and a 400-class status.

Apply this diff near the prompt-sensitive check:

```diff
-        newAPIError = types.NewError(err, types.ErrorCodeSensitiveWordsDetected)
+        newAPIError = types.NewOpenAIError(
+            fmt.Errorf("user sensitive words detected: %s", strings.Join(words, ", ")),
+            types.ErrorCodeSensitiveWordsDetected,
+            http.StatusBadRequest,
+        )
```

Likely an incorrect or invalid review comment.

Comment on lines +85 to 91:

```go
var writer *service.SensitiveWordFilterResponseWriter
defer func() {
	if writer != nil && writer.GetNewAPIError() != nil {
		writer.GetNewAPIError().SetMessage(common.MessageWithRequestId(writer.GetNewAPIError().Error(), requestId))
	}

	if newAPIError != nil {
```

⚠️ Potential issue | 🟠 Major

Request ID tagging happens after the writer already wrote the response

writer.GetNewAPIError().SetMessage(...) in defer is too late; the client has already received the error without the request id. Move request-id augmentation into the writer at the point you construct the NewAPIError so the emitted body includes it.

🤖 Prompt for AI Agents
In controller/relay.go around lines 85 to 91: the code appends the request id to the NewAPIError in a defer, after the writer may already have flushed a response, so clients receive the error body without the request id. Instead, modify the place(s) where the writer constructs a NewAPIError (inside service.SensitiveWordFilterResponseWriter) to include the request id at creation time, either by wrapping the message with common.MessageWithRequestId(requestErr.Error(), requestId) or by setting a request-id field on the error object, so the writer emits the augmented body. Remove the late defer-only augmentation, or keep it only as a fallback.
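
A sketch of constructor-time tagging; the requestId field, the helper name, and the return type are assumptions based on the symbols quoted in this review:

```go
// Hypothetical helper on the writer: build the sensitive-word error with the
// request id already embedded, so the body the writer emits carries it.
func (w *SensitiveWordFilterResponseWriter) newSensitiveError(words []string) *types.NewAPIError {
	msg := marshalCommon.MessageWithRequestId(
		fmt.Sprintf("user sensitive words detected: %s", strings.Join(words, ", ")),
		w.requestId, // assumed: captured from the gin context when the writer was created
	)
	return types.NewOpenAIError(fmt.Errorf("%s", msg), types.ErrorCodeSensitiveWordsDetected, http.StatusBadRequest)
}
```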

Comment on lines +179 to +236:

```go
func (w *SensitiveWordFilterResponseWriter) processNonStreamResponse(bodyBytes []byte) []byte {
	var contents string
	if w.Info.RelayFormat == types.RelayFormatOpenAI {
		var simpleResponse dto.OpenAITextResponse
		if err := marshalCommon.Unmarshal(bodyBytes, &simpleResponse); err != nil {
			logger.LogError(w.Context, types.NewOpenAIError(err, types.ErrorCodeBadResponseBody, http.StatusInternalServerError).Error())
			return bodyBytes
		}
		for _, choice := range simpleResponse.Choices {
			value, isString := choice.Message.Content.(string)
			if isString {
				contents += value
			}
		}
	} else if w.Info.RelayFormat == types.RelayFormatGemini {
		var geminiResponse dto.GeminiChatResponse
		if err := marshalCommon.Unmarshal(bodyBytes, &geminiResponse); err != nil {
			logger.LogError(w.Context, types.NewOpenAIError(err, types.ErrorCodeBadResponseBody, http.StatusInternalServerError).Error())
			return bodyBytes
		}
		for _, candidate := range geminiResponse.Candidates {
			for _, part := range candidate.Content.Parts {
				contents += part.Text
			}
		}
	} else if w.Info.RelayFormat == types.RelayFormatClaude {
		var claudeResponse dto.ClaudeResponse
		if err := marshalCommon.Unmarshal(bodyBytes, &claudeResponse); err != nil {
			logger.LogError(w.Context, types.NewOpenAIError(err, types.ErrorCodeBadResponseBody, http.StatusInternalServerError).Error())
			return bodyBytes
		}
		for _, content := range claudeResponse.Content {
			if content.Text != nil {
				contents += *content.Text
			}
		}
	} else if w.Info.RelayFormat == types.RelayFormatOpenAIResponses {
		var responsesResponse dto.OpenAIResponsesResponse
		if err := marshalCommon.Unmarshal(bodyBytes, &responsesResponse); err != nil {
			logger.LogError(w.Context, types.NewOpenAIError(err, types.ErrorCodeBadResponseBody, http.StatusInternalServerError).Error())
			return bodyBytes
		}
		for _, output := range responsesResponse.Output {
			for _, content := range output.Content {
				contents += content.Text
			}
		}
	}
	if contents != "" {
		contains, words := CheckSensitiveText(contents)
		if contains {
			logger.LogWarn(w.Context, fmt.Sprintf("user sensitive words detected: %s", strings.Join(words, ", ")))
			w.newAPIError = types.NewError(fmt.Errorf("user sensitive words detected: %s", strings.Join(words, ", ")), types.ErrorCodeSensitiveWordsDetected)
			return bodyBytes
		}
	}
	return bodyBytes
}
```

⚠️ Potential issue | 🟠 Major

Non-stream error should be 400 and include request id

It currently uses types.NewError(...), which defaults to 500, and the emitted body lacks the request id. Build an OpenAI error with status 400 and tag the message with the request id.

Apply this diff (note: fmt.Errorf needs an explicit format string here, since the message is built at runtime):

```diff
-            w.newAPIError = types.NewError(fmt.Errorf("user sensitive words detected: %s", strings.Join(words, ", ")), types.ErrorCodeSensitiveWordsDetected)
+            w.newAPIError = types.NewOpenAIError(
+                fmt.Errorf("%s", marshalCommon.MessageWithRequestId(
+                    fmt.Sprintf("user sensitive words detected: %s", strings.Join(words, ", ")),
+                    w.Context.GetString(marshalCommon.RequestIdKey),
+                )),
+                types.ErrorCodeSensitiveWordsDetected,
+                http.StatusBadRequest,
+            )
```
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

```go
func (w *SensitiveWordFilterResponseWriter) processNonStreamResponse(bodyBytes []byte) []byte {
	var contents string
	if w.Info.RelayFormat == types.RelayFormatOpenAI {
		var simpleResponse dto.OpenAITextResponse
		if err := marshalCommon.Unmarshal(bodyBytes, &simpleResponse); err != nil {
			logger.LogError(w.Context, types.NewOpenAIError(err, types.ErrorCodeBadResponseBody, http.StatusInternalServerError).Error())
			return bodyBytes
		}
		for _, choice := range simpleResponse.Choices {
			value, isString := choice.Message.Content.(string)
			if isString {
				contents += value
			}
		}
	} else if w.Info.RelayFormat == types.RelayFormatGemini {
		var geminiResponse dto.GeminiChatResponse
		if err := marshalCommon.Unmarshal(bodyBytes, &geminiResponse); err != nil {
			logger.LogError(w.Context, types.NewOpenAIError(err, types.ErrorCodeBadResponseBody, http.StatusInternalServerError).Error())
			return bodyBytes
		}
		for _, candidate := range geminiResponse.Candidates {
			for _, part := range candidate.Content.Parts {
				contents += part.Text
			}
		}
	} else if w.Info.RelayFormat == types.RelayFormatClaude {
		var claudeResponse dto.ClaudeResponse
		if err := marshalCommon.Unmarshal(bodyBytes, &claudeResponse); err != nil {
			logger.LogError(w.Context, types.NewOpenAIError(err, types.ErrorCodeBadResponseBody, http.StatusInternalServerError).Error())
			return bodyBytes
		}
		for _, content := range claudeResponse.Content {
			if content.Text != nil {
				contents += *content.Text
			}
		}
	} else if w.Info.RelayFormat == types.RelayFormatOpenAIResponses {
		var responsesResponse dto.OpenAIResponsesResponse
		if err := marshalCommon.Unmarshal(bodyBytes, &responsesResponse); err != nil {
			logger.LogError(w.Context, types.NewOpenAIError(err, types.ErrorCodeBadResponseBody, http.StatusInternalServerError).Error())
			return bodyBytes
		}
		for _, output := range responsesResponse.Output {
			for _, content := range output.Content {
				contents += content.Text
			}
		}
	}
	if contents != "" {
		contains, words := CheckSensitiveText(contents)
		if contains {
			logger.LogWarn(w.Context, fmt.Sprintf("user sensitive words detected: %s", strings.Join(words, ", ")))
			w.newAPIError = types.NewOpenAIError(
				fmt.Errorf("%s", marshalCommon.MessageWithRequestId(
					fmt.Sprintf("user sensitive words detected: %s", strings.Join(words, ", ")),
					w.Context.GetString(marshalCommon.RequestIdKey),
				)),
				types.ErrorCodeSensitiveWordsDetected,
				http.StatusBadRequest,
			)
			return bodyBytes
		}
	}
	return bodyBytes
}
```
🤖 Prompt for AI Agents
In service/sensitive.go around lines 179 to 236: when sensitive words are detected, the code sets w.newAPIError via types.NewError (which defaults to 500) and does not include the request id. Replace it with an OpenAI-style error with HTTP 400 that embeds the request id in the message. Concretely, build the error with types.NewOpenAIError(..., types.ErrorCodeSensitiveWordsDetected, http.StatusBadRequest) and assign it to w.newAPIError, composing the message from the joined sensitive words and the request id (e.g. "user sensitive words detected: <words> (request_id=<id>)").

Comment on lines +239 to +271:

```go
func (w *SensitiveWordFilterResponseWriter) processStreamResponse(bodyBytes []byte) []byte {
	scanner := bufio.NewScanner(bytes.NewReader(bodyBytes))
	for scanner.Scan() {
		data := scanner.Text()
		if len(data) < 6 {
			continue
		}
		if data[:5] != "data:" && data[:6] != "[DONE]" {
			continue
		}
		data = data[5:]
		data = strings.TrimLeft(data, " ")
		data = strings.TrimSuffix(data, "\r")
		if !strings.HasPrefix(data, "[DONE]") {
			content, ok := w.parserLineChatInfo(data)
			if !ok {
				continue
			}
			w.appendStreamRespContent(content)
		}
	}
	if w.streamContent != "" {
		contains, words := CheckSensitiveText(w.streamContent)
		if contains {
			logger.LogWarn(w.Context, fmt.Sprintf("user sensitive words detected: %s", strings.Join(words, ", ")))
			w.newAPIError = types.NewError(fmt.Errorf("user sensitive words detected: %s", strings.Join(words, ", ")), types.ErrorCodeSensitiveWordsDetected)
			// fill the detected words into the final error stream message
			w.errorStreamMessageTemplate = strings.Replace(w.errorStreamMessageTemplate, SensitiveWordPlaceholders, strings.Join(words, ", "), 1)
			return bodyBytes
		}
	}
	return bodyBytes
}
```

⚠️ Potential issue | 🟠 Major

Streaming filter can leak sensitive content before detection

Because prior chunks have already been forwarded, sensitive words may reach the client before detection triggers. If strict filtering is required, buffer and flush only "safe" prefixes (e.g., maintain a sliding window sized to the longest sensitive term), or terminate the stream before emitting offending tokens. A sketch follows.
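
A minimal sketch of the sliding-window idea, under the assumption of a pending byte buffer on the writer, a known maximum sensitive-term length, and a hypothetical sentinel error; none of these names come from the PR:

```go
var errSensitiveDetected = errors.New("sensitive words detected") // hypothetical sentinel

// flushSafePrefix forwards only bytes that cannot belong to a sensitive term
// still being assembled, keeping a tail of maxSensitiveLen for rechecks.
func (w *SensitiveWordFilterResponseWriter) flushSafePrefix(maxSensitiveLen int) error {
	if contains, _ := CheckSensitiveText(string(w.pending)); contains {
		return errSensitiveDetected // caller terminates the stream with the error template
	}
	if len(w.pending) <= maxSensitiveLen {
		return nil // too little buffered to rule out a partial match; keep waiting
	}
	safe := w.pending[:len(w.pending)-maxSensitiveLen]
	if _, err := w.ResponseWriter.Write(safe); err != nil {
		return err
	}
	w.pending = w.pending[len(safe):] // retain the tail for the next scan
	return nil
}
```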

🤖 Prompt for AI Agents
In service/sensitive.go around lines 239 to 271: the current streaming logic can forward earlier chunks before sensitive content is detected. To fix it, buffer incoming stream chunks instead of emitting them immediately, maintaining a sliding-window buffer sized to the longest sensitive term (or a configurable max lookahead). Scan the buffer after each appended chunk; flush only the prefix that is guaranteed free of any partial sensitive token, retaining a tail equal to the maximum sensitive-term length for future checks. If a match is found, terminate the stream (or replace the offending suffix with the error stream template) and log/set the error. Ensure the function emits only the safe bytes and preserves the leftover window for subsequent processing.

⚠️ Potential issue | 🔴 Critical

Incorrect handling of “[DONE]” lines; may try to parse “]” as JSON

You always slice data = data[5:], even when the line is bare “[DONE]”. This corrupts the token and triggers spurious “bad_response_body” logs.

Apply this diff:

```diff
-        if data[:5] != "data:" && data[:6] != "[DONE]" {
-            continue
-        }
-        data = data[5:]
-        data = strings.TrimLeft(data, " ")
-        data = strings.TrimSuffix(data, "\r")
-        if !strings.HasPrefix(data, "[DONE]") {
-            content, ok := w.parserLineChatInfo(data)
-            if !ok {
-                continue
-            }
-            w.appendStreamRespContent(content)
-        }
+        // Only strip the "data:" prefix when present; handle bare "[DONE]" safely.
+        if strings.HasPrefix(data, "data:") {
+            data = strings.TrimSpace(data[5:])
+            if strings.HasPrefix(data, "[DONE]") {
+                continue
+            }
+            if content, ok := w.parserLineChatInfo(data); ok {
+                w.appendStreamRespContent(content)
+            }
+            continue
+        }
+        if strings.HasPrefix(data, "[DONE]") {
+            continue
+        }
```

Additionally, consider logging scanner.Err() after the scan loop, for example:
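
```go
// Surface read errors that bufio.Scanner swallows during iteration.
if err := scanner.Err(); err != nil {
	logger.LogError(w.Context, fmt.Sprintf("stream scan error: %v", err))
}
```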

Comment on lines +260 to +267:

```go
	if w.streamContent != "" {
		contains, words := CheckSensitiveText(w.streamContent)
		if contains {
			logger.LogWarn(w.Context, fmt.Sprintf("user sensitive words detected: %s", strings.Join(words, ", ")))
			w.newAPIError = types.NewError(fmt.Errorf("user sensitive words detected: %s", strings.Join(words, ", ")), types.ErrorCodeSensitiveWordsDetected)
			// fill the detected words into the final error stream message
			w.errorStreamMessageTemplate = strings.Replace(w.errorStreamMessageTemplate, SensitiveWordPlaceholders, strings.Join(words, ", "), 1)
			return bodyBytes
```

⚠️ Potential issue | 🟠 Major

Stream error should be 400 and include the request id; the current 500 lacks it

Same as the non-stream case: build the error with status 400 and embed the request id at creation time so the SSE error body includes it.

Apply this diff (as above, fmt.Errorf needs an explicit format string for a runtime-built message):

```diff
-            w.newAPIError = types.NewError(fmt.Errorf("user sensitive words detected: %s", strings.Join(words, ", ")), types.ErrorCodeSensitiveWordsDetected)
+            w.newAPIError = types.NewOpenAIError(
+                fmt.Errorf("%s", marshalCommon.MessageWithRequestId(
+                    fmt.Sprintf("user sensitive words detected: %s", strings.Join(words, ", ")),
+                    w.Context.GetString(marshalCommon.RequestIdKey),
+                )),
+                types.ErrorCodeSensitiveWordsDetected,
+                http.StatusBadRequest,
+            )
```
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

```go
	if w.streamContent != "" {
		contains, words := CheckSensitiveText(w.streamContent)
		if contains {
			logger.LogWarn(w.Context, fmt.Sprintf("user sensitive words detected: %s", strings.Join(words, ", ")))
			w.newAPIError = types.NewOpenAIError(
				fmt.Errorf("%s", marshalCommon.MessageWithRequestId(
					fmt.Sprintf("user sensitive words detected: %s", strings.Join(words, ", ")),
					w.Context.GetString(marshalCommon.RequestIdKey),
				)),
				types.ErrorCodeSensitiveWordsDetected,
				http.StatusBadRequest,
			)
			// fill the detected words into the final error stream message
			w.errorStreamMessageTemplate = strings.Replace(w.errorStreamMessageTemplate, SensitiveWordPlaceholders, strings.Join(words, ", "), 1)
			return bodyBytes
```
