Sensitive word filtering of responses for the four chat endpoints and the playground endpoint #2058
Conversation
Walkthrough

Introduces sensitive word filtering for LLM chat completion responses by adding a configuration flag, implementing a response writer that intercepts and checks content, detecting violations in both streamed and non-streamed formats, and returning appropriate errors when sensitive words are found.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client
    participant Relay as controller/relay.go
    participant Writer as SensitiveWordFilterResponseWriter
    participant LLM as Downstream API
    participant Filter as service/sensitive
    Client->>Relay: POST /chat/completions
    alt CheckSensitiveOnCompletionEnabled && IsChatCompletionEndpoint
        Relay->>Writer: Create SensitiveWordFilterResponseWriter
    end
    Relay->>LLM: Forward request
    LLM-->>Writer: Response (streamed or buffered)
    Writer->>Filter: Intercept & check for sensitive words
    alt Sensitive Words Detected
        Writer->>Filter: Create NewAPIError (ErrorCodeSensitiveWordsDetected)
        Filter-->>Writer: Error with end-user message
        Writer-->>Client: Error response (SSE or JSON format)
    else No Violations
        Writer-->>Client: Original response
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

The changes introduce a new filtering subsystem spanning multiple layers (configuration, service, controller, UI) with non-trivial logic for multi-format response handling, streaming content assembly, and error normalization.
Actionable comments posted: 5
🧹 Nitpick comments (3)
controller/relay.go (1)
199-203: Avoid double-response side effects when the writer handled the error

If the writer detected a sensitive response, it has already written the error SSE/JSON. You set newAPIError and let the defer try to write again (later swallowed). Consider short-circuiting after assigning newAPIError to skip further response writes, which keeps control flow clearer and avoids confusing logs; a sketch follows.
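A minimal sketch of the short-circuit; the errorAlreadyWritten flag is hypothetical, so adapt it to however the handler tracks committed responses:

```go
// Hypothetical: the writer already emitted the SSE/JSON error body, so
// record the error for logging but skip any further response writes.
if writer != nil && writer.GetNewAPIError() != nil {
	newAPIError = writer.GetNewAPIError()
	errorAlreadyWritten = true // hypothetical flag the deferred writer checks before writing
	return
}
```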
service/sensitive.go (2)
4-17: Import alias collision reduces clarity

Using marshalCommon for common while also importing relay/common as common is confusing. Rename relay/common to relaycommon to avoid shadowing; a sketch follows.
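A sketch of the suggested rename; the `one-api/...` import paths are an assumption about the module layout:

```go
import (
	marshalCommon "one-api/common"     // existing alias for the JSON helpers, kept as-is
	relaycommon "one-api/relay/common" // renamed from the implicit "common" to avoid shadowing
)
```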
122-165: SSE path: consider flushing after writing the error, and remove unused fields

- After writing the SSE error, call Flush() if supported to push it to the client immediately (see the sketch after this list).
- SensitiveWordFilterResponseWriter.Body is unused; drop it to reduce allocation.
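A sketch of the flush, assuming the writer embeds a ResponseWriter that implements http.Flusher (gin's does):

```go
// Push the just-written SSE error to the client immediately.
if flusher, ok := w.ResponseWriter.(http.Flusher); ok {
	flusher.Flush()
}
```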
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- controller/relay.go (3 hunks)
- model/option.go (2 hunks)
- service/sensitive.go (2 hunks)
- setting/sensitive.go (2 hunks)
- web/src/components/settings/OperationSetting.jsx (1 hunks)
- web/src/pages/Setting/Operation/SettingsSensitiveWords.jsx (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (4)
model/option.go (2)
- common/constants.go (1)
  - OptionMap (37-37)
- setting/sensitive.go (1)
  - CheckSensitiveOnCompletionEnabled (8-8)

web/src/pages/Setting/Operation/SettingsSensitiveWords.jsx (1)
- web/src/components/settings/OperationSetting.jsx (1)
  - inputs (32-75)

controller/relay.go (3)
- service/sensitive.go (2)
  - SensitiveWordFilterResponseWriter (100-112)
  - IsChatCompletionEndpoint (92-98)
- common/utils.go (1)
  - MessageWithRequestId (289-291)
- setting/sensitive.go (1)
  - ShouldCheckCompletionSensitive (41-43)

service/sensitive.go (10)
- dto/request_common.go (1)
  - Request (8-12)
- relay/common/relay_info.go (1)
  - RelayInfo (75-122)
- types/error.go (5)
  - NewAPIError (87-95)
  - NewOpenAIError (226-249)
  - ErrorCodeBadResponseBody (72-72)
  - NewError (204-224)
  - ErrorCodeSensitiveWordsDetected (40-40)
- types/relay_format.go (5)
  - RelayFormat (3-3)
  - RelayFormatClaude (7-7)
  - RelayFormatOpenAI (6-6)
  - RelayFormatGemini (8-8)
  - RelayFormatOpenAIResponses (9-9)
- common/json.go (3)
  - Marshal (21-23)
  - Unmarshal (9-11)
  - UnmarshalJsonStr (13-15)
- dto/openai_response.go (4)
  - OpenAITextResponse (39-47)
  - OpenAIResponsesResponse (261-284)
  - ChatCompletionsStreamResponse (141-149)
  - ResponsesStreamResponse (362-367)
- logger/logger.go (2)
  - LogError (65-67)
  - LogWarn (61-63)
- dto/openai_request.go (1)
  - Message (278-289)
- dto/gemini.go (1)
  - GeminiChatResponse (301-305)
- dto/claude.go (1)
  - ClaudeResponse (447-461)
🔇 Additional comments (5)
model/option.go (1)

138-138: Flag wiring looks correct

Option map init and update correctly propagate CheckSensitiveOnCompletionEnabled.

Also applies to: 282-283
web/src/components/settings/OperationSetting.jsx (1)

61-61: LGTM

Defaulting CheckSensitiveOnCompletionEnabled to false matches backend gating.
setting/sensitive.go (1)

8-8: LGTM

Flag and helper compose correctly with existing CheckSensitiveEnabled.

Also applies to: 41-43
web/src/pages/Setting/Operation/SettingsSensitiveWords.jsx (1)

37-37: LGTM

New toggle is wired consistently with existing settings form behavior.

Also applies to: 129-143
controller/relay.go (1)

131-141: Writer gating looks good, but ensure the prompt precheck builds a non-nil error with a 4xx status

Gating on ShouldCheckCompletionSensitive() and IsChatCompletionEndpoint is correct. Separately, the earlier prompt check constructs newAPIError via types.NewError(err, ...); err is nil there, yielding empty messages and a 500. Use a concrete error and a 400-class status.

Apply this diff near the prompt-sensitive check:

```diff
-	newAPIError = types.NewError(err, types.ErrorCodeSensitiveWordsDetected)
+	newAPIError = types.NewOpenAIError(
+		fmt.Errorf("user sensitive words detected: %s", strings.Join(words, ", ")),
+		types.ErrorCodeSensitiveWordsDetected,
+		http.StatusBadRequest,
+	)
```

Likely an incorrect or invalid review comment.
```go
var writer *service.SensitiveWordFilterResponseWriter
defer func() {
	if writer != nil && writer.GetNewAPIError() != nil {
		writer.GetNewAPIError().SetMessage(common.MessageWithRequestId(writer.GetNewAPIError().Error(), requestId))
	}
	// …
	if newAPIError != nil {
```
Request ID tagging happens after the writer already wrote the response

writer.GetNewAPIError().SetMessage(...) in the defer is too late; the client has already received the error without the request id. Move the request-id augmentation into the writer at the point where it constructs the NewAPIError, so the emitted body includes it.
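A sketch of tagging at creation time inside the writer; the helpers are the ones this review references elsewhere, while the RequestIdKey context lookup is an assumption about where the id is stored:

```go
// Build the message with the request id before anything is flushed,
// so the emitted SSE/JSON body already carries it.
requestId := w.Context.GetString(common.RequestIdKey) // assumed context key
msg := common.MessageWithRequestId(
	fmt.Sprintf("user sensitive words detected: %s", strings.Join(words, ", ")),
	requestId,
)
w.newAPIError = types.NewOpenAIError(errors.New(msg),
	types.ErrorCodeSensitiveWordsDetected, http.StatusBadRequest)
```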
🤖 Prompt for AI Agents
In controller/relay.go around lines 85 to 91: the code appends the request-id to
the NewAPIError in a defer after the writer may have already flushed a response,
so clients receive the error body without the request-id; instead, modify the
place(s) where the writer constructs/returns a NewAPIError (inside
service.SensitiveWordFilterResponseWriter) to immediately include or wrap the
error message/payload with common.MessageWithRequestId(requestErr.Error(),
requestId) (or set the request-id field on the error object) at creation time so
the writer emits the augmented body, and remove the late defer-only augmentation
(or keep it as a fallback).
```go
func (w *SensitiveWordFilterResponseWriter) processNonStreamResponse(bodyBytes []byte) []byte {
	var contents string
	if w.Info.RelayFormat == types.RelayFormatOpenAI {
		var simpleResponse dto.OpenAITextResponse
		if err := marshalCommon.Unmarshal(bodyBytes, &simpleResponse); err != nil {
			logger.LogError(w.Context, types.NewOpenAIError(err, types.ErrorCodeBadResponseBody, http.StatusInternalServerError).Error())
			return bodyBytes
		}
		for _, choice := range simpleResponse.Choices {
			value, isString := choice.Message.Content.(string)
			if isString {
				contents += value
			}
		}
	} else if w.Info.RelayFormat == types.RelayFormatGemini {
		var geminiResponse dto.GeminiChatResponse
		if err := marshalCommon.Unmarshal(bodyBytes, &geminiResponse); err != nil {
			logger.LogError(w.Context, types.NewOpenAIError(err, types.ErrorCodeBadResponseBody, http.StatusInternalServerError).Error())
			return bodyBytes
		}
		for _, candidate := range geminiResponse.Candidates {
			for _, part := range candidate.Content.Parts {
				contents += part.Text
			}
		}
	} else if w.Info.RelayFormat == types.RelayFormatClaude {
		var claudeResponse dto.ClaudeResponse
		if err := marshalCommon.Unmarshal(bodyBytes, &claudeResponse); err != nil {
			logger.LogError(w.Context, types.NewOpenAIError(err, types.ErrorCodeBadResponseBody, http.StatusInternalServerError).Error())
			return bodyBytes
		}
		for _, content := range claudeResponse.Content {
			if content.Text != nil {
				contents += *content.Text
			}
		}
	} else if w.Info.RelayFormat == types.RelayFormatOpenAIResponses {
		var responsesResponse dto.OpenAIResponsesResponse
		if err := marshalCommon.Unmarshal(bodyBytes, &responsesResponse); err != nil {
			logger.LogError(w.Context, types.NewOpenAIError(err, types.ErrorCodeBadResponseBody, http.StatusInternalServerError).Error())
			return bodyBytes
		}
		for _, output := range responsesResponse.Output {
			for _, content := range output.Content {
				contents += content.Text
			}
		}
	}
	if contents != "" {
		contains, words := CheckSensitiveText(contents)
		if contains {
			logger.LogWarn(w.Context, fmt.Sprintf("user sensitive words detected: %s", strings.Join(words, ", ")))
			w.newAPIError = types.NewError(fmt.Errorf("user sensitive words detected: %s", strings.Join(words, ", ")), types.ErrorCodeSensitiveWordsDetected)
			return bodyBytes
		}
	}
	return bodyBytes
}
```
Non-stream error should be 400 and include the request id

Currently uses types.NewError(...), which defaults to 500 and lacks the request id in the emitted body. Build an OpenAI error with 400 and tag the message with the request id.

Apply this diff:

```diff
-			w.newAPIError = types.NewError(fmt.Errorf("user sensitive words detected: %s", strings.Join(words, ", ")), types.ErrorCodeSensitiveWordsDetected)
+			w.newAPIError = types.NewOpenAIError(
+				fmt.Errorf(marshalCommon.MessageWithRequestId(
+					fmt.Sprintf("user sensitive words detected: %s", strings.Join(words, ", ")),
+					w.Context.GetString(marshalCommon.RequestIdKey),
+				)),
+				types.ErrorCodeSensitiveWordsDetected,
+				http.StatusBadRequest,
+			)
```

📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
func (w *SensitiveWordFilterResponseWriter) processNonStreamResponse(bodyBytes []byte) []byte {
	var contents string
	if w.Info.RelayFormat == types.RelayFormatOpenAI {
		var simpleResponse dto.OpenAITextResponse
		if err := marshalCommon.Unmarshal(bodyBytes, &simpleResponse); err != nil {
			logger.LogError(w.Context, types.NewOpenAIError(err, types.ErrorCodeBadResponseBody, http.StatusInternalServerError).Error())
			return bodyBytes
		}
		for _, choice := range simpleResponse.Choices {
			value, isString := choice.Message.Content.(string)
			if isString {
				contents += value
			}
		}
	} else if w.Info.RelayFormat == types.RelayFormatGemini {
		var geminiResponse dto.GeminiChatResponse
		if err := marshalCommon.Unmarshal(bodyBytes, &geminiResponse); err != nil {
			logger.LogError(w.Context, types.NewOpenAIError(err, types.ErrorCodeBadResponseBody, http.StatusInternalServerError).Error())
			return bodyBytes
		}
		for _, candidate := range geminiResponse.Candidates {
			for _, part := range candidate.Content.Parts {
				contents += part.Text
			}
		}
	} else if w.Info.RelayFormat == types.RelayFormatClaude {
		var claudeResponse dto.ClaudeResponse
		if err := marshalCommon.Unmarshal(bodyBytes, &claudeResponse); err != nil {
			logger.LogError(w.Context, types.NewOpenAIError(err, types.ErrorCodeBadResponseBody, http.StatusInternalServerError).Error())
			return bodyBytes
		}
		for _, content := range claudeResponse.Content {
			if content.Text != nil {
				contents += *content.Text
			}
		}
	} else if w.Info.RelayFormat == types.RelayFormatOpenAIResponses {
		var responsesResponse dto.OpenAIResponsesResponse
		if err := marshalCommon.Unmarshal(bodyBytes, &responsesResponse); err != nil {
			logger.LogError(w.Context, types.NewOpenAIError(err, types.ErrorCodeBadResponseBody, http.StatusInternalServerError).Error())
			return bodyBytes
		}
		for _, output := range responsesResponse.Output {
			for _, content := range output.Content {
				contents += content.Text
			}
		}
	}
	if contents != "" {
		contains, words := CheckSensitiveText(contents)
		if contains {
			logger.LogWarn(w.Context, fmt.Sprintf("user sensitive words detected: %s", strings.Join(words, ", ")))
			w.newAPIError = types.NewOpenAIError(
				fmt.Errorf(marshalCommon.MessageWithRequestId(
					fmt.Sprintf("user sensitive words detected: %s", strings.Join(words, ", ")),
					w.Context.GetString(marshalCommon.RequestIdKey),
				)),
				types.ErrorCodeSensitiveWordsDetected,
				http.StatusBadRequest,
			)
			return bodyBytes
		}
	}
	return bodyBytes
}
```
🤖 Prompt for AI Agents
In service/sensitive.go around lines 179 to 236, when sensitive words are
detected the code sets w.newAPIError using types.NewError (defaults to 500) and
does not include the request id; replace that with an OpenAI-style error with
HTTP 400 and include the request id in the message. Concretely, build the error
using types.NewOpenAIError(..., types.ErrorCodeSensitiveWordsDetected,
http.StatusBadRequest) and set w.newAPIError to that error, composing the error
message to include the joined sensitive words and the request id from w.Info
(e.g. "user sensitive words detected: <words> (request_id=<id>)").
```go
func (w *SensitiveWordFilterResponseWriter) processStreamResponse(bodyBytes []byte) []byte {
	scanner := bufio.NewScanner(bytes.NewReader(bodyBytes))
	for scanner.Scan() {
		data := scanner.Text()
		if len(data) < 6 {
			continue
		}
		if data[:5] != "data:" && data[:6] != "[DONE]" {
			continue
		}
		data = data[5:]
		data = strings.TrimLeft(data, " ")
		data = strings.TrimSuffix(data, "\r")
		if !strings.HasPrefix(data, "[DONE]") {
			content, ok := w.parserLineChatInfo(data)
			if !ok {
				continue
			}
			w.appendStreamRespContent(content)
		}
	}
	if w.streamContent != "" {
		contains, words := CheckSensitiveText(w.streamContent)
		if contains {
			logger.LogWarn(w.Context, fmt.Sprintf("user sensitive words detected: %s", strings.Join(words, ", ")))
			w.newAPIError = types.NewError(fmt.Errorf("user sensitive words detected: %s", strings.Join(words, ", ")), types.ErrorCodeSensitiveWordsDetected)
			// replace the placeholder in the last message
			w.errorStreamMessageTemplate = strings.Replace(w.errorStreamMessageTemplate, SensitiveWordPlaceholders, strings.Join(words, ", "), 1)
			return bodyBytes
		}
	}
	return bodyBytes
}
```
Streaming filter can leak sensitive content before detection

Because prior chunks are already forwarded, sensitive words may be sent before detection triggers. If strict filtering is required, buffer and only flush "safe" prefixes (e.g., maintain a sliding window up to the maximum sensitive-term length), or terminate the stream before emitting offending tokens. A sketch follows the agent prompt below.
🤖 Prompt for AI Agents
In service/sensitive.go around lines 239 to 271, the current streaming logic can
forward earlier chunks before sensitive content is detected; to fix it, change
the flow to buffer incoming stream chunks instead of immediately emitting them,
maintain a sliding window buffer sized to the longest sensitive term (or a
configurable max lookahead), scan each new chunk appended to the buffer for
sensitive matches, flush only the prefix of the buffer that is guaranteed free
of any partial sensitive token (retain the tail equal to the max sensitive
length for future checks), and if a match is found terminate the stream (or
replace the offending suffix with the error stream template) and log/set the
error; ensure the function returns/outputs the safe bytes only and preserves the
leftover partial-window for subsequent processing.
Incorrect handling of "[DONE]" lines; may try to parse "]" as JSON

You always slice data = data[5:], even when the line is a bare "[DONE]". This corrupts the token and triggers spurious "bad_response_body" logs.

Apply this diff:

```diff
-		if data[:5] != "data:" && data[:6] != "[DONE]" {
-			continue
-		}
-		data = data[5:]
-		data = strings.TrimLeft(data, " ")
-		data = strings.TrimSuffix(data, "\r")
-		if !strings.HasPrefix(data, "[DONE]") {
-			content, ok := w.parserLineChatInfo(data)
-			if !ok {
-				continue
-			}
-			w.appendStreamRespContent(content)
-		}
+		// Only strip the "data:" prefix when present; handle a bare "[DONE]" safely.
+		if strings.HasPrefix(data, "data:") {
+			data = strings.TrimSpace(data[5:])
+			if strings.HasPrefix(data, "[DONE]") {
+				continue
+			}
+			if content, ok := w.parserLineChatInfo(data); ok {
+				w.appendStreamRespContent(content)
+			}
+			continue
+		}
+		if strings.HasPrefix(data, "[DONE]") {
+			continue
+		}
```

Additionally, consider logging scanner.Err() if present after Scan(); see the sketch below.
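A sketch of the scanner-error check, using the logger this file already imports:

```go
// After the scan loop: surface read errors instead of dropping them silently.
if err := scanner.Err(); err != nil {
	logger.LogError(w.Context, fmt.Sprintf("stream scan error: %v", err))
}
```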
```go
	if w.streamContent != "" {
		contains, words := CheckSensitiveText(w.streamContent)
		if contains {
			logger.LogWarn(w.Context, fmt.Sprintf("user sensitive words detected: %s", strings.Join(words, ", ")))
			w.newAPIError = types.NewError(fmt.Errorf("user sensitive words detected: %s", strings.Join(words, ", ")), types.ErrorCodeSensitiveWordsDetected)
			// replace the placeholder in the last message
			w.errorStreamMessageTemplate = strings.Replace(w.errorStreamMessageTemplate, SensitiveWordPlaceholders, strings.Join(words, ", "), 1)
			return bodyBytes
```
Stream error should be 400 and include the request id; the current 500 lacks it

Same as the non-stream case: build the error with 400 and embed the request id at creation so the SSE error body includes it.

Apply this diff:

```diff
-			w.newAPIError = types.NewError(fmt.Errorf("user sensitive words detected: %s", strings.Join(words, ", ")), types.ErrorCodeSensitiveWordsDetected)
+			w.newAPIError = types.NewOpenAIError(
+				fmt.Errorf(marshalCommon.MessageWithRequestId(
+					fmt.Sprintf("user sensitive words detected: %s", strings.Join(words, ", ")),
+					w.Context.GetString(marshalCommon.RequestIdKey),
+				)),
+				types.ErrorCodeSensitiveWordsDetected,
+				http.StatusBadRequest,
+			)
```

📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
	if w.streamContent != "" {
		contains, words := CheckSensitiveText(w.streamContent)
		if contains {
			logger.LogWarn(w.Context, fmt.Sprintf("user sensitive words detected: %s", strings.Join(words, ", ")))
			w.newAPIError = types.NewOpenAIError(
				fmt.Errorf(marshalCommon.MessageWithRequestId(
					fmt.Sprintf("user sensitive words detected: %s", strings.Join(words, ", ")),
					w.Context.GetString(marshalCommon.RequestIdKey),
				)),
				types.ErrorCodeSensitiveWordsDetected,
				http.StatusBadRequest,
			)
			// replace the placeholder in the last message
			w.errorStreamMessageTemplate = strings.Replace(w.errorStreamMessageTemplate, SensitiveWordPlaceholders, strings.Join(words, ", "), 1)
			return bodyBytes
```
Summary by CodeRabbit

Release Notes

New Features
- Sensitive word filtering for chat completion responses, covering both streamed and non-streamed formats, with clear errors returned when violations are detected.

Configuration
- New operation setting to enable or disable sensitive word checking on completion responses.