
Conversation

@stbenjam
Member

@stbenjam stbenjam commented Nov 5, 2025

Summary by CodeRabbit

Release Notes

  • New Features

    • Custom prompt management system with dedicated UI for creating, editing, and organizing prompts
    • YAML-based prompt editor with syntax highlighting for advanced configuration
    • Import and export prompts as YAML files for backup and sharing
    • AI-assisted prompt generation workflow to streamline creation
    • Visual indicators to distinguish between local and server-based prompts
  • Improvements

    • Increased maximum chat message length from 10,000 to 100,000 characters
  • Dependencies

    • Added the js-yaml, nunjucks, and react-syntax-highlighter libraries for prompt processing and editing functionality

- Browser-persisted local prompts stored in IndexedDB
- AI-assisted prompt creation with optional chat history context
- Dual-view editor (YAML and Form modes) with version history
- AI refinement capability for iterating on prompts
- Dedicated Prompt Manager modal with list/detail layout
- Visual indicators to distinguish local vs server prompts
- Client-side template rendering using Nunjucks
- Export/import functionality for sharing prompts
- Increased message character limit to 100K
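
A minimal sketch of the local/server merge behavior described above (function and field names are assumptions, not the PR's actual code):

```javascript
// Hypothetical sketch: merge server and local prompts, tagging each
// with a `source` field so the UI can render the local-vs-server
// indicator listed above.
function mergePrompts(serverPrompts, localPrompts) {
  const tagged = [
    ...serverPrompts.map((p) => ({ ...p, source: 'server' })),
    ...localPrompts.map((p) => ({ ...p, source: 'local' })),
  ]
  // Sort by name for a stable slash-command list
  return tagged.sort((a, b) => a.name.localeCompare(b.name))
}

const merged = mergePrompts([{ name: 'triage' }], [{ name: 'my-prompt' }])
```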
@openshift-ci-robot

Pipeline controller notification
This repository is configured to use the pipeline controller. Second-stage tests will be triggered either automatically or after the lgtm label is added, depending on the repository configuration. The pipeline controller will automatically detect which contexts are required and will use /test Prow commands to trigger the second stage.

For optional jobs, comment /test ? to see a list of all defined jobs. Review these jobs and use /test <job> to manually trigger optional jobs most likely to be impacted by the proposed changes.

@openshift-ci openshift-ci bot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Nov 5, 2025
@openshift-ci openshift-ci bot requested review from deads2k and sosiouxme November 5, 2025 19:07
@openshift-ci
Contributor

openshift-ci bot commented Nov 5, 2025

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: stbenjam

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@coderabbitai
Contributor

coderabbitai bot commented Nov 5, 2025

Walkthrough

A custom prompt management system is introduced, enabling users to create, edit, and store prompts locally in the browser. New dependencies support YAML parsing and syntax highlighting. The store is restructured to distinguish server prompts from local prompts, with client-side template rendering via Nunjucks. New UI components provide prompt creation, editing, and management interfaces integrated into the chat header, input, and settings sections.

Changes

Cohort / File(s) Summary
Dependencies
package.json
Added @types/js-yaml, js-yaml, nunjucks, and react-syntax-highlighter as new runtime and type dependencies.
Prompt Schema & Utilities
src/chat/promptSchema.js, src/chat/chatUtils.js
New promptSchema.js exports YAML validation (validatePromptYAML, validatePromptName), conversion (promptToYAML), extraction (extractYAMLFromText), and template generation utilities. Updated chatUtils.js raises validateMessage character limit from 10k to 100k.
Store & State Management
src/chat/store/promptsSlice.js, src/chat/store/useChatStore.js
promptsSlice now separates serverPrompts and localPrompts, adds getPrompts() merger, exposes local-prompt lifecycle methods (saveLocalPrompt, updateLocalPrompt, deleteLocalPrompt, getLocalPrompt), adds client-side rendering via Nunjucks, and metadata timestamping. useChatStore selector updated to expose new local-prompt actions and merged prompts.
New Prompt UI Components
src/chat/YamlEditor.js, src/chat/CreatePromptDialog.js, src/chat/PromptEditor.js, src/chat/PromptManagerModal.js
Introduces four new components: YamlEditor for YAML editing with syntax highlighting, CreatePromptDialog for AI-assisted prompt generation, PromptEditor for dual-view (YAML/form) prompt editing with version history, and PromptManagerModal for comprehensive prompt management (create/edit/delete/import/export).
Existing Component Integrations
src/chat/ChatHeader.js, src/chat/ChatInput.js, src/chat/ChatSettings.js, src/chat/SlashCommandModal.js, src/chat/SlashCommandSelector.js
ChatHeader and ChatSettings integrate PromptManagerModal via useState and usePrompts hook. ChatInput adds CreatePromptDialog and PromptEditor for AI-assisted prompt creation. SlashCommandModal and SlashCommandSelector render local-prompt indicator (Chip with ComputerIcon) when prompt.source === 'local'.
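
A hedged sketch of an extraction helper like the extractYAMLFromText utility summarized above (the regex and fallback behavior are assumptions based on the description, not the file's actual contents):

```javascript
// Hypothetical sketch: pull the first fenced YAML block out of an AI
// response, falling back to the raw text when no fence is present.
function extractYAMLFromText(text) {
  const match = text.match(/```(?:yaml|yml)?\s*\n([\s\S]*?)```/i)
  return (match ? match[1] : text).trim()
}

const reply = 'Here you go:\n```yaml\nname: example\nprompt: Hi\n```\nDone.'
const yamlText = extractYAMLFromText(reply)
```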

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant ChatInput
    participant CreatePromptDialog
    participant OneShotChatModal
    participant PromptEditor
    participant Store

    User->>ChatInput: Click "Create Prompt"
    ChatInput->>CreatePromptDialog: open=true
    CreatePromptDialog->>CreatePromptDialog: Prefill with chat history
    User->>CreatePromptDialog: Enter description + click "Generate with AI"
    CreatePromptDialog->>OneShotChatModal: buildAIPrompt (YAML-focused)
    OneShotChatModal->>OneShotChatModal: Generate YAML via AI
    CreatePromptDialog->>ChatInput: onYAMLGenerated(yaml)
    ChatInput->>PromptEditor: open=true, initialYaml
    PromptEditor->>PromptEditor: Parse YAML → form state
    User->>PromptEditor: Edit (YAML or form view) + Save
    PromptEditor->>Store: saveLocalPrompt(promptData)
    Store->>Store: Validate name, add metadata
    Store-->>PromptEditor: success
    PromptEditor->>ChatInput: onClose()
    Note over User,ChatInput: Prompt saved locally
sequenceDiagram
    participant User
    participant ChatSettings
    participant PromptManagerModal
    participant PromptEditor
    participant Store

    User->>ChatSettings: Click "Custom Prompts"
    ChatSettings->>PromptManagerModal: open=true
    PromptManagerModal->>Store: getPrompts()
    Store-->>PromptManagerModal: [localPrompts, serverPrompts]
    User->>PromptManagerModal: Select or search prompt
    alt Create New
        User->>PromptManagerModal: Click "Create"
        PromptManagerModal->>CreatePromptDialog: open (AI generation flow)
    else Edit Existing
        User->>PromptManagerModal: Click "Edit"
        PromptManagerModal->>PromptEditor: open, initializeFromPrompt()
    end
    PromptEditor->>PromptEditor: Render dual-view + version history
    User->>PromptEditor: Edit + Save
    PromptEditor->>Store: updateLocalPrompt or saveLocalPrompt
    Store->>Store: Validate, timestamp, persist
    PromptEditor->>PromptManagerModal: onClose()
    Note over User,Store: Local prompts persisted
sequenceDiagram
    participant ChatSystem
    participant Store
    participant RenderEngine
    participant Nunjucks as Nunjucks<br/>(Client)
    participant Server

    ChatSystem->>Store: getPrompts()
    Store->>Store: Merge serverPrompts + localPrompts
    Store-->>ChatSystem: [merged prompts with source tag]
    ChatSystem->>RenderEngine: renderPrompt(prompt)
    alt Local Prompt (source='local')
        RenderEngine->>Nunjucks: Template + variables
        Nunjucks-->>RenderEngine: Rendered output
    else Server Prompt (source='server')
        RenderEngine->>Server: Send to backend rendering
        Server-->>RenderEngine: Rendered output
    end
    RenderEngine-->>ChatSystem: Final rendered content

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

Areas requiring extra attention:

  • PromptEditor.js — Substantial component with dual-view editing, YAML↔form synchronization, version history, and AI refinement workflow; multiple state variables and validation logic paths
  • promptsSlice.js — Significant store refactoring separating server/local prompts; new client-side rendering via Nunjucks; validation and naming-conflict logic require careful examination
  • PromptManagerModal.js — Complex modal with search, CRUD operations, import/export, and nested editor/dialog integrations; interaction flow verification needed
  • promptSchema.js — New utility module with YAML parsing, validation, and extraction logic; edge cases in YAML error handling and field validation warrant scrutiny
  • Integration patterns — Verify state-lifting and callback chains across ChatHeader, ChatInput, ChatSettings, and new prompt components; hook usage (usePrompts) consistency
  • Store selectors in useChatStore.js — Ensure persistence configuration includes localPrompts and new actions are correctly mapped

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)
  • Excessive Css In React Should Use Styles — ⚠️ Warning. Multiple new React components (YamlEditor.js, PromptManagerModal.js, PromptEditor.js, CreatePromptDialog.js) use inline CSS style objects with 3-5 properties each without the makeStyles pattern standard in this codebase. Resolution: extract complex inline styles into makeStyles hooks following the codebase pattern; reserve inline styles for simple dynamic values depending on props or state, and apply this consistently across all new components.
  • Single Responsibility And Clear Naming — ⚠️ Warning. PromptManagerModal (806 lines) violates single responsibility with 5+ distinct concerns: prompt filtering, selection, creation, deletion, import/export, and managing nested dialogs. PromptEditor (1100+ lines) has 7+ responsibilities spanning dual-view editing, validation, version history, and AI refinement. promptSchema.js mixes unrelated concerns: YAML validation/serialization, text extraction, templates, and name validation. Resolution: decompose PromptManagerModal and PromptEditor into focused sub-components; split promptSchema.js into specific modules (yamlValidator.js, yamlParser.js) or rename it promptUtilities.js; ensure component names are specific and action-oriented, not generic.
✅ Passed checks (4 passed)
  • Description Check — ✅ Passed. Check skipped: CodeRabbit's high-level summary is enabled.
  • Title check — ✅ Passed. The title '[wip] Add custom prompt management feature' clearly describes the main change of introducing a custom prompt management system with local prompts, AI-assisted creation, and a prompt manager modal.
  • Go Error Handling — ✅ Passed. No Go files present in the pull request; check not applicable to JavaScript/React-only changes.
  • Sql Injection Prevention — ✅ Passed. Frontend React application with no SQL query construction, database connections, or SQL injection vectors detected.
✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment

Comment @coderabbitai help to get the list of available commands and usage tips.

@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Nov 5, 2025
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 8

🧹 Nitpick comments (1)
sippy-ng/package.json (1)

21-21: Consider adding type definitions for better TypeScript support.

Type definitions are missing for nunjucks and react-syntax-highlighter. Consider adding:

  • @types/react-syntax-highlighter
  • @types/nunjucks (if available)

This will improve IDE support and catch type-related issues during development.

Apply this diff to add the missing type definitions:

     "@types/js-yaml": "^4.0.9",
+    "@types/nunjucks": "^3.2.6",
+    "@types/react-syntax-highlighter": "^15.5.13",
     "chart.js": "^3.5.1",

Also applies to: 32-33, 46-46

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Cache: Disabled due to data retention organization setting

Knowledge base: Disabled due to data retention organization setting

📥 Commits

Reviewing files that changed from the base of the PR and between aa5f04f and 10ff77f.

⛔ Files ignored due to path filters (1)
  • sippy-ng/package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (14)
  • sippy-ng/package.json (3 hunks)
  • sippy-ng/src/chat/ChatHeader.js (5 hunks)
  • sippy-ng/src/chat/ChatInput.js (7 hunks)
  • sippy-ng/src/chat/ChatSettings.js (6 hunks)
  • sippy-ng/src/chat/CreatePromptDialog.js (1 hunks)
  • sippy-ng/src/chat/PromptEditor.js (1 hunks)
  • sippy-ng/src/chat/PromptManagerModal.js (1 hunks)
  • sippy-ng/src/chat/SlashCommandModal.js (4 hunks)
  • sippy-ng/src/chat/SlashCommandSelector.js (3 hunks)
  • sippy-ng/src/chat/YamlEditor.js (1 hunks)
  • sippy-ng/src/chat/chatUtils.js (1 hunks)
  • sippy-ng/src/chat/promptSchema.js (1 hunks)
  • sippy-ng/src/chat/store/promptsSlice.js (4 hunks)
  • sippy-ng/src/chat/store/useChatStore.js (2 hunks)
🔇 Additional comments (5)
sippy-ng/src/chat/SlashCommandSelector.js (1)

1-145: LGTM!

The local prompt indicator is well-implemented with:

  • Clean flexbox layout for text and chip alignment
  • Appropriate visual styling (outlined, small, secondary)
  • Conditional rendering based on prompt source
sippy-ng/src/chat/SlashCommandModal.js (1)

360-373: LGTM!

The local prompt indicator in the modal title is consistent with the implementation in SlashCommandSelector.

sippy-ng/src/chat/ChatSettings.js (1)

485-566: LGTM!

The custom prompts management section is well-integrated with:

  • Clear UI displaying local prompt count
  • Consistent styling with other sections
  • Proper state management for the modal
sippy-ng/src/chat/ChatHeader.js (1)

164-214: LGTM!

The prompt manager integration is well-implemented with:

  • Clear button with dynamic count in tooltip
  • Consistent pattern with ChatSettings
  • Proper modal state management
sippy-ng/src/chat/store/useChatStore.js (1)

162-177: getPrompts() implementation verified and working correctly.

The getPrompts() method is properly implemented in sippy-ng/src/chat/store/promptsSlice.js (lines 18-28). It correctly merges server and local prompts by mapping each with appropriate source attribution and returns a sorted combined array. The hook correctly calls this method at line 165 in useChatStore.js.

Comment on lines +347 to 349
if (length > 90000) return 'error'
if (length > 80000) return 'warning'
return ''

⚠️ Potential issue | 🟠 Major

Fix character limit display inconsistency.

The validation logic checks for 90,000/80,000 character limits, but the UI displays a maximum of 10,000. According to the PR description, the message limit was increased to 100K, so the display should reflect the actual limits.

Apply this diff to fix the display:

           className={`${classes.characterCount} ${getCharacterCountClass()}`}
         >
-          {message.length}/10000
+          {message.length}/100000
         </span>

Also applies to: 421-421

🤖 Prompt for AI Agents
In sippy-ng/src/chat/ChatInput.js around lines 347-349 (and also at line 421),
the validation uses 90,000/80,000 thresholds but the UI displays a 10,000 max;
update the UI display to reflect the actual 100,000 message limit and matching
warning/error thresholds. Replace hard-coded display values with constants
(e.g., MAX_MESSAGE_LEN = 100000, WARNING_THRESHOLD = 80000, ERROR_THRESHOLD =
90000) or update the displayed numbers to 100000/80000/90000, and ensure both
locations use the same constants so validation and display remain consistent.
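
The constant extraction suggested in the prompt above can be sketched as follows (the names MAX_MESSAGE_LEN, WARNING_THRESHOLD, and ERROR_THRESHOLD are illustrative, not the PR's code):

```javascript
// Hypothetical constants tying the displayed maximum and the
// warning/error thresholds to one source of truth.
const MAX_MESSAGE_LEN = 100000
const WARNING_THRESHOLD = 80000
const ERROR_THRESHOLD = 90000

function getCharacterCountClass(length) {
  if (length > ERROR_THRESHOLD) return 'error'
  if (length > WARNING_THRESHOLD) return 'warning'
  return ''
}

// The counter label would then render `${length}/${MAX_MESSAGE_LEN}`,
// keeping display and validation in sync.
```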

Comment on lines +128 to +131
 if (content.length > 100000) {
   return {
     valid: false,
-    error: 'Message is too long (max 10,000 characters)',
+    error: 'Message is too long (max 100,000 characters)',

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

Verify backend and WebSocket compatibility with the increased limit.

The change is consistent—both the validation threshold and error message are updated together. However, ensure the following can handle 100KB messages:

  • Backend API validation matches this new limit
  • WebSocket message size limits are sufficient
  • IndexedDB storage handles larger messages without performance degradation
  • Server-side rate limiting or payload constraints won't reject these messages

Run the following script to check for any hardcoded message size limits in the backend:


🏁 Script executed:

#!/bin/bash
# Description: Search for message size validation or limits in backend code

# Search for numeric limits around 10000 or 100000
rg -n -C3 '\b(10000|100000|10_000|100_000)\b' --type go --type python --type java

# Search for message validation patterns
rg -n -C3 'message.*length|length.*message|max.*characters' --type go --type python --type java

Length of output: 61


Let me generate a corrected script to search the codebase more broadly:


🏁 Script executed:

#!/bin/bash
# Description: Search for message size validation or limits - corrected search

# Search for numeric limits around 10000 or 100000 in all files
rg -n -C3 '\b(10000|100000|10_000|100_000)\b' --type ts --type js --type go

# Search for message validation patterns in all files
rg -n -C3 'message.*length|length.*message|max.*characters' --type ts --type js --type go

# Search for message size limits in config or env files
rg -n 'message.*size|size.*limit|MAX.*MESSAGE' -i

# Look for backend server/API files that might validate messages
fd -e go -e ts -e js -i -x grep -l "message\|validation" {}

# Check for any package.json or config files with message size settings
fd -e json -e yaml -e yml -e env -x grep -l "message.*size\|limit" {}

Length of output: 262875


🏁 Script executed:

#!/bin/bash
# Search for all message limit references in chat files

echo "=== Checking for message length limit references ==="
rg -n "10000|100000" sippy-ng/src/chat/ --type js

echo -e "\n=== Checking for character count thresholds ==="
rg -n "90000|80000" sippy-ng/src/chat/ --type js

echo -e "\n=== Searching for other size limit references ==="
rg -n "max.*message|message.*max" sippy-ng/src/chat/ --type js -i | head -20

Length of output: 625


Update UI character counter to match new 100K limit

The backend validation was increased to 100,000 characters (line 128 of chatUtils.js), but the character counter display in ChatInput.js still shows the old 10,000 limit. This creates a confusing UX where:

  • Users can actually send up to 100,000 characters
  • But the UI counter displays {message.length}/10000
  • Warning thresholds trigger at 80K/90K when UI claims max is 10K

Required fixes in sippy-ng/src/chat/ChatInput.js:

  • Line 421: Change {message.length}/10000 to {message.length}/100000
  • Lines 347-348: Verify warning color thresholds (currently 90,000 and 80,000) are appropriate for the new 100K limit, or adjust if needed
🤖 Prompt for AI Agents
In sippy-ng/src/chat/ChatInput.js around lines 347-348 and 421, the UI character
counter still shows the old 10,000 limit while backend validation now allows
100,000; update the display at line 421 from showing "/10000" to "/100000" and
verify/adjust the warning threshold logic at lines 347-348 so the warning colors
trigger at appropriate values for a 100,000-char limit (e.g., 80,000 and 90,000)
instead of thresholds calibrated to 10,000.

Comment on lines +62 to +75
const handleClose = () => {
setPromptDescription('')
setIncludeHistory(false)
onClose()
}

const handleAIResult = (result) => {
console.log('=== AI Generated Prompt Response ===')
console.log('Full response length:', result.length)

// Pass the generated YAML to parent (should be raw YAML)
onYAMLGenerated(result)
handleClose()
}

⚠️ Potential issue | 🟠 Major

Close the AI modal when finishing generation.

Line 62: Without resetting aiModalOpen, the OneShotChatModal stays open even after the parent dialog closes, leaving users with a stale overlay that obscures the app. Add setAiModalOpen(false) to handleClose so both cancel and completion paths properly tear down the modal before returning control.

 const handleClose = () => {
-    setPromptDescription('')
-    setIncludeHistory(false)
-    onClose()
+    setAiModalOpen(false)
+    setPromptDescription('')
+    setIncludeHistory(false)
+    onClose()
   }
🤖 Prompt for AI Agents
In sippy-ng/src/chat/CreatePromptDialog.js around lines 62 to 75, the
handleClose function does not reset the aiModalOpen state so the
OneShotChatModal remains visible after the parent dialog closes; update
handleClose to call setAiModalOpen(false) (in addition to clearing
promptDescription and includeHistory and calling onClose) so both cancel and
completion flows properly tear down the AI modal before returning control.

Comment on lines +185 to +198
// Reset on close
setYamlContent('')
setFormData({
name: '',
description: '',
hide: false,
arguments: [],
prompt: '',
})
setVersions([])
setValidationErrors([])
setSaveError(null)
setAiRefinementPrompt('')
}

⚠️ Potential issue | 🟠 Major

Reset the refinement modal when the editor closes.

Line 195: When open flips to false, the cleanup branch resets most editor state but leaves aiModalOpen untouched. If the refinement modal was active, it remains visible even though the editor dialog disappeared. Reset aiModalOpen to false alongside the other cleanup state.

       setValidationErrors([])
       setSaveError(null)
-      setAiRefinementPrompt('')
+      setAiRefinementPrompt('')
+      setAiModalOpen(false)
     }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change:

       // Reset on close
       setYamlContent('')
       setFormData({
         name: '',
         description: '',
         hide: false,
         arguments: [],
         prompt: '',
       })
       setVersions([])
       setValidationErrors([])
       setSaveError(null)
       setAiRefinementPrompt('')
+      setAiModalOpen(false)
     }
🤖 Prompt for AI Agents
In sippy-ng/src/chat/PromptEditor.js around lines 185 to 198, the cleanup branch
that resets editor state when the dialog closes does not reset the refinement
modal flag; add a call to setAiModalOpen(false) alongside the other state resets
so any open refinement modal is closed when the editor closes.

Comment on lines +324 to +348
const handleAIRefinementResult = (result) => {
// Extract YAML from the result
const yamlBlockRegex = /```(?:yaml|yml)?\s*\n([\s\S]*?)```/i
const match = result.match(yamlBlockRegex)

if (match) {
const newYAML = match[1].trim()
// Add to version history
setVersions((prev) => [
{ yaml: newYAML, timestamp: new Date().toISOString() },
...prev.slice(0, 9), // Keep max 10 versions
])
setYamlContent(newYAML)
parseYAMLToForm(newYAML)
} else {
// If no YAML block found, try to use the whole result
setVersions((prev) => [
{ yaml: result, timestamp: new Date().toISOString() },
...prev.slice(0, 9),
])
setYamlContent(result)
parseYAMLToForm(result)
}
setAiRefinementPrompt('')
}

⚠️ Potential issue | 🟠 Major

Dismiss the refinement modal after applying AI output.

Line 341: handleAIRefinementResult updates the YAML but never flips aiModalOpen back to false, so the refinement modal lingers after every run and the user has to close it manually. Call setAiModalOpen(false) after processing the response.

   } else {
     // If no YAML block found, try to use the whole result
     setVersions((prev) => [
       { yaml: result, timestamp: new Date().toISOString() },
       ...prev.slice(0, 9),
     ])
     setYamlContent(result)
     parseYAMLToForm(result)
   }
-  setAiRefinementPrompt('')
+  setAiRefinementPrompt('')
+  setAiModalOpen(false)
🤖 Prompt for AI Agents
In sippy-ng/src/chat/PromptEditor.js around lines 324 to 348,
handleAIRefinementResult updates YAML and UI state but never closes the AI
refinement modal; after you finish applying the result (after updating versions,
yamlContent, parseYAMLToForm and clearing ai refinement prompt) call
setAiModalOpen(false) to dismiss the modal automatically so it doesn't linger
for the user.
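
The prepend-and-cap pattern used for the version history in the snippet above can be sketched in isolation (the helper name is hypothetical):

```javascript
// Newest version first, at most 10 entries retained — the same shape
// as `[{ yaml, timestamp }, ...prev.slice(0, 9)]` in the reviewed code.
function pushVersion(versions, yaml, timestamp = new Date().toISOString()) {
  return [{ yaml, timestamp }, ...versions.slice(0, 9)]
}

let versions = []
for (let i = 1; i <= 12; i++) {
  versions = pushVersion(versions, `v${i}`)
}
```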

Comment on lines +220 to +237
const input = document.createElement('input')
input.type = 'file'
input.accept = '.yaml,.yml'
input.onchange = (e) => {
const file = e.target.files[0]
if (file) {
const reader = new FileReader()
reader.onload = (event) => {
try {
const yamlContent = event.target.result
// Split by document separator if multiple prompts
const prompts = yamlContent.split(/\n---\n/)
prompts.forEach((promptYAML) => {
if (promptYAML.trim()) {
setEditorInitialYAML(promptYAML.trim())
setEditorPromptName(null)
setEditorOpen(true)
}

⚠️ Potential issue | 🟠 Major

Handle multiple imported prompts instead of dropping them.

Line 231: The forEach inside the file reader keeps calling setEditorInitialYAML(...) and setEditorOpen(true) for every YAML document, but React batches those state updates so only the last document survives. Any earlier prompts in the file are silently discarded, breaking bulk-import. Please restructure the import flow so each document is surfaced (e.g., queue them and open them one by one, or materialize them immediately via saveLocalPrompt) instead of overwriting state inside the loop.

🤖 Prompt for AI Agents
In sippy-ng/src/chat/PromptManagerModal.js around lines 220-237, the file-reader
loop calls setEditorInitialYAML and setEditorOpen for each YAML document which
gets batched so only the last document is kept; instead, for each non-empty
document either immediately persist it (e.g., call the existing saveLocalPrompt
handler with the parsed YAML and a generated/parsed name) or enqueue all parsed
prompts into component state and then open them one-by-one (set the queue state
and open the editor for the first item), ensuring you do not repeatedly call
setEditorInitialYAML inside the forEach; implement one of these two flows so
every imported document is saved or queued and surfaced rather than overwritten.
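
One possible shape for the queued import flow this comment suggests (function names are hypothetical; js-yaml's loadAll() could also replace the regex split for more robust multi-document parsing):

```javascript
// Split the file into documents up front instead of overwriting
// editor state inside a loop.
function parseImportQueue(yamlContent) {
  return yamlContent
    .split(/\n---\n/)
    .map((doc) => doc.trim())
    .filter((doc) => doc.length > 0)
}

// Surface one document at a time; call again with the returned rest
// when the editor closes.
function openNext(queue, openEditor) {
  if (queue.length === 0) return []
  const [head, ...rest] = queue
  openEditor(head) // e.g. setEditorInitialYAML(head); setEditorOpen(true)
  return rest
}

const queue = parseImportQueue('name: a\n---\nname: b\n---\n')
```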

Comment on lines +97 to +117
return (
<Box className={classes.editorContainer}>
<div className={classes.textFieldContainer}>
<TextField
multiline
value={value}
onChange={(e) => onChange(e.target.value)}
placeholder={placeholder}
error={!!error}
helperText={error}
fullWidth
variant="outlined"
InputProps={{
style: {
fontFamily: 'Consolas, Monaco, "Courier New", monospace',
},
}}
/>
</div>
</Box>
)

⚠️ Potential issue | 🟠 Major

Add safety check for optional onChange prop.

The onChange prop is optional in PropTypes but is called unconditionally in edit mode. This could cause a runtime error if the component is used without providing onChange.

Apply this diff to add a safety check:

   return (
     <Box className={classes.editorContainer}>
       <div className={classes.textFieldContainer}>
         <TextField
           multiline
           value={value}
-          onChange={(e) => onChange(e.target.value)}
+          onChange={(e) => onChange?.(e.target.value)}
           placeholder={placeholder}
           error={!!error}
           helperText={error}
           fullWidth
           variant="outlined"
           InputProps={{
             style: {
               fontFamily: 'Consolas, Monaco, "Courier New", monospace',
             },
           }}
         />
       </div>
     </Box>
   )
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change:

   return (
     <Box className={classes.editorContainer}>
       <div className={classes.textFieldContainer}>
         <TextField
           multiline
           value={value}
-          onChange={(e) => onChange(e.target.value)}
+          onChange={(e) => onChange?.(e.target.value)}
           placeholder={placeholder}
           error={!!error}
           helperText={error}
           fullWidth
           variant="outlined"
           InputProps={{
             style: {
               fontFamily: 'Consolas, Monaco, "Courier New", monospace',
             },
           }}
         />
       </div>
     </Box>
   )
🤖 Prompt for AI Agents
In sippy-ng/src/chat/YamlEditor.js around lines 97 to 117, the component calls
onChange unconditionally which can throw if the optional prop is not provided;
update the onChange handler to guard the call (e.g., check if typeof onChange
=== 'function' or use onChange && onChange(...)) or provide a default no-op prop
so the component only invokes onChange when it exists.

@openshift-ci-robot

Scheduling required tests:
/test e2e

@openshift-ci
Contributor

openshift-ci bot commented Nov 5, 2025

@stbenjam: all tests passed!

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

