[wip] Add custom prompt management feature #3088
base: main
Conversation
- Browser-persisted local prompts stored in IndexedDB
- AI-assisted prompt creation with optional chat history context
- Dual-view editor (YAML and Form modes) with version history
- AI refinement capability for iterating on prompts
- Dedicated Prompt Manager modal with list/detail layout
- Visual indicators to distinguish local vs server prompts
- Client-side template rendering using Nunjucks (see the sketch after this list)
- Export/import functionality for sharing prompts
- Increased message character limit to 100K
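Since client-side rendering is the load-bearing change here, a minimal sketch of what Nunjucks rendering in the browser can look like may help. The function and variable names below are illustrative, not the PR's actual code:

```js
import nunjucks from 'nunjucks'

// A standalone environment: no file-system loader is needed in the browser.
const env = new nunjucks.Environment(null, { autoescape: false })

// Render a local prompt template against user-supplied arguments.
function renderLocalPrompt(template, variables) {
  // renderString compiles and renders the template in one call.
  return env.renderString(template, variables)
}

// Example: a template with a single argument.
console.log(renderLocalPrompt('Summarize failures for {{ release }}.', { release: '4.18' }))
// => "Summarize failures for 4.18."
```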
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: stbenjam

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment.
Walkthrough

A custom prompt management system is introduced, enabling users to create, edit, and store prompts locally in the browser. New dependencies support YAML parsing and syntax highlighting. The store is restructured to distinguish server and local prompts, with client-side rendering via Nunjucks. New UI components provide prompt creation, editing, and management interfaces integrated into the chat header, input, and settings sections.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
participant User
participant ChatInput
participant CreatePromptDialog
participant OneShotChatModal
participant PromptEditor
participant Store
User->>ChatInput: Click "Create Prompt"
ChatInput->>CreatePromptDialog: open=true
CreatePromptDialog->>CreatePromptDialog: Prefill with chat history
User->>CreatePromptDialog: Enter description + click "Generate with AI"
CreatePromptDialog->>OneShotChatModal: buildAIPrompt (YAML-focused)
OneShotChatModal->>OneShotChatModal: Generate YAML via AI
CreatePromptDialog->>ChatInput: onYAMLGenerated(yaml)
ChatInput->>PromptEditor: open=true, initialYaml
PromptEditor->>PromptEditor: Parse YAML → form state
User->>PromptEditor: Edit (YAML or form view) + Save
PromptEditor->>Store: saveLocalPrompt(promptData)
Store->>Store: Validate name, add metadata
Store-->>PromptEditor: success
PromptEditor->>ChatInput: onClose()
Note over User,ChatInput: Prompt saved locally
```

```mermaid
sequenceDiagram
participant User
participant ChatSettings
participant PromptManagerModal
participant PromptEditor
participant Store
User->>ChatSettings: Click "Custom Prompts"
ChatSettings->>PromptManagerModal: open=true
PromptManagerModal->>Store: getPrompts()
Store-->>PromptManagerModal: [localPrompts, serverPrompts]
User->>PromptManagerModal: Select or search prompt
alt Create New
User->>PromptManagerModal: Click "Create"
PromptManagerModal->>CreatePromptDialog: open (AI generation flow)
else Edit Existing
User->>PromptManagerModal: Click "Edit"
PromptManagerModal->>PromptEditor: open, initializeFromPrompt()
end
PromptEditor->>PromptEditor: Render dual-view + version history
User->>PromptEditor: Edit + Save
PromptEditor->>Store: updateLocalPrompt or saveLocalPrompt
Store->>Store: Validate, timestamp, persist
PromptEditor->>PromptManagerModal: onClose()
Note over User,Store: Local prompts persisted
```

```mermaid
sequenceDiagram
participant ChatSystem
participant Store
participant RenderEngine
participant Nunjucks as Nunjucks<br/>(Client)
participant Server
ChatSystem->>Store: getPrompts()
Store->>Store: Merge serverPrompts + localPrompts
Store-->>ChatSystem: [merged prompts with source tag]
ChatSystem->>RenderEngine: renderPrompt(prompt)
alt Local Prompt (source='local')
RenderEngine->>Nunjucks: Template + variables
Nunjucks-->>RenderEngine: Rendered output
else Server Prompt (source='server')
RenderEngine->>Server: Send to backend rendering
Server-->>RenderEngine: Rendered output
end
RenderEngine-->>ChatSystem: Final rendered content
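```

The third diagram routes rendering by prompt source. A minimal sketch of that dispatch, assuming a Nunjucks-rendered `template` field and an illustrative backend endpoint (the endpoint path and function names are assumptions, not the PR's API):

```js
import nunjucks from 'nunjucks'

// Dispatch on the source tag added by the store merge.
async function renderPrompt(prompt, variables) {
  if (prompt.source === 'local') {
    // Local prompts never leave the browser: render with Nunjucks.
    return nunjucks.renderString(prompt.template, variables)
  }
  // Server prompts keep the existing backend rendering path.
  // '/api/chat/render-prompt' is a hypothetical endpoint for illustration.
  const res = await fetch('/api/chat/render-prompt', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ name: prompt.name, variables }),
  })
  return res.text()
}
```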
Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

Areas requiring extra attention:
Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)
✅ Passed checks (4 passed)
✨ Finishing touches
🧪 Generate unit tests (beta)
Actionable comments posted: 8
🧹 Nitpick comments (1)
sippy-ng/package.json (1)
21-21: Consider adding type definitions for better TypeScript support.

Type definitions are missing for nunjucks and react-syntax-highlighter. Consider adding:

- @types/react-syntax-highlighter
- @types/nunjucks (if available)

This will improve IDE support and catch type-related issues during development.

Apply this diff to add the missing type definitions:
"@types/js-yaml": "^4.0.9", + "@types/nunjucks": "^3.2.6", + "@types/react-syntax-highlighter": "^15.5.13", "chart.js": "^3.5.1",Also applies to: 32-33, 46-46
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to data retention organization setting
⛔ Files ignored due to path filters (1)
sippy-ng/package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (14)
- sippy-ng/package.json (3 hunks)
- sippy-ng/src/chat/ChatHeader.js (5 hunks)
- sippy-ng/src/chat/ChatInput.js (7 hunks)
- sippy-ng/src/chat/ChatSettings.js (6 hunks)
- sippy-ng/src/chat/CreatePromptDialog.js (1 hunks)
- sippy-ng/src/chat/PromptEditor.js (1 hunks)
- sippy-ng/src/chat/PromptManagerModal.js (1 hunks)
- sippy-ng/src/chat/SlashCommandModal.js (4 hunks)
- sippy-ng/src/chat/SlashCommandSelector.js (3 hunks)
- sippy-ng/src/chat/YamlEditor.js (1 hunks)
- sippy-ng/src/chat/chatUtils.js (1 hunks)
- sippy-ng/src/chat/promptSchema.js (1 hunks)
- sippy-ng/src/chat/store/promptsSlice.js (4 hunks)
- sippy-ng/src/chat/store/useChatStore.js (2 hunks)
🔇 Additional comments (5)
sippy-ng/src/chat/SlashCommandSelector.js (1)
1-145: LGTM!

The local prompt indicator is well-implemented with:
- Clean flexbox layout for text and chip alignment
- Appropriate visual styling (outlined, small, secondary)
- Conditional rendering based on prompt source
sippy-ng/src/chat/SlashCommandModal.js (1)
360-373: LGTM!

The local prompt indicator in the modal title is consistent with the implementation in SlashCommandSelector.
sippy-ng/src/chat/ChatSettings.js (1)
485-566: LGTM!

The custom prompts management section is well-integrated with:
- Clear UI displaying local prompt count
- Consistent styling with other sections
- Proper state management for the modal
sippy-ng/src/chat/ChatHeader.js (1)
164-214: LGTM!

The prompt manager integration is well-implemented with:
- Clear button with dynamic count in tooltip
- Consistent pattern with ChatSettings
- Proper modal state management
sippy-ng/src/chat/store/useChatStore.js (1)
162-177: getPrompts() implementation verified and working correctly.

The getPrompts() method is properly implemented in sippy-ng/src/chat/store/promptsSlice.js (lines 18-28). It correctly merges server and local prompts by mapping each with appropriate source attribution and returns a sorted combined array. The hook correctly calls this method at line 165 in useChatStore.js.
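The merge the comment describes can be pictured with a short zustand-style sketch (field names and sort key are illustrative, not the verified implementation):

```js
// Sketch of a prompts slice whose getPrompts() tags each prompt with its
// source and returns one combined, sorted list.
const createPromptsSlice = (set, get) => ({
  serverPrompts: [],
  localPrompts: [],
  getPrompts: () => {
    const { serverPrompts, localPrompts } = get()
    return [
      ...serverPrompts.map((p) => ({ ...p, source: 'server' })),
      ...localPrompts.map((p) => ({ ...p, source: 'local' })),
    ].sort((a, b) => a.name.localeCompare(b.name))
  },
})
```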
```js
if (length > 90000) return 'error'
if (length > 80000) return 'warning'
return ''
```
Fix character limit display inconsistency.
The validation logic checks for 90,000/80,000 character limits, but the UI displays a maximum of 10,000. According to the PR description, the message limit was increased to 100K, so the display should reflect the actual limits.
Apply this diff to fix the display:
```diff
   className={`${classes.characterCount} ${getCharacterCountClass()}`}
 >
-  {message.length}/10000
+  {message.length}/100000
 </span>
```

Also applies to: 421-421
🤖 Prompt for AI Agents
In sippy-ng/src/chat/ChatInput.js around lines 347-349 (and also at line 421),
the validation uses 90,000/80,000 thresholds but the UI displays a 10,000 max;
update the UI display to reflect the actual 100,000 message limit and matching
warning/error thresholds. Replace hard-coded display values with constants
(e.g., MAX_MESSAGE_LEN = 100000, WARNING_THRESHOLD = 80000, ERROR_THRESHOLD =
90000) or update the displayed numbers to 100000/80000/90000, and ensure both
locations use the same constants so validation and display remain consistent.
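The constants approach the prompt suggests might look like this (the names are the reviewer's proposed constants, not existing PR code):

```js
// Shared limits so the validator and the counter display cannot drift apart.
export const MAX_MESSAGE_LEN = 100000
export const ERROR_THRESHOLD = 90000
export const WARNING_THRESHOLD = 80000

// Color class for the character counter, driven by the shared thresholds.
export function getCharacterCountClass(length) {
  if (length > ERROR_THRESHOLD) return 'error'
  if (length > WARNING_THRESHOLD) return 'warning'
  return ''
}

// The counter display then renders `${message.length}/${MAX_MESSAGE_LEN}`.
```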
```diff
 if (content.length > 100000) {
   return {
     valid: false,
-    error: 'Message is too long (max 10,000 characters)',
+    error: 'Message is too long (max 100,000 characters)',
```
🧩 Analysis chain
Verify backend and WebSocket compatibility with the increased limit.
The change is consistent—both the validation threshold and error message are updated together. However, ensure the following can handle 100KB messages:
- Backend API validation matches this new limit
- WebSocket message size limits are sufficient
- IndexedDB storage handles larger messages without performance degradation
- Server-side rate limiting or payload constraints won't reject these messages
Run the following script to check for any hardcoded message size limits in the backend:
🏁 Script executed:

```bash
#!/bin/bash
# Description: Search for message size validation or limits in backend code

# Search for numeric limits around 10000 or 100000
rg -n -C3 '\b(10000|100000|10_000|100_000)\b' --type go --type python --type java

# Search for message validation patterns
rg -n -C3 'message.*length|length.*message|max.*characters' --type go --type python --type java
```

Length of output: 61
Let me generate a corrected script to search the codebase more broadly:
🏁 Script executed:

```bash
#!/bin/bash
# Description: Search for message size validation or limits - corrected search

# Search for numeric limits around 10000 or 100000 in all files
rg -n -C3 '\b(10000|100000|10_000|100_000)\b' --type ts --type js --type go

# Search for message validation patterns in all files
rg -n -C3 'message.*length|length.*message|max.*characters' --type ts --type js --type go

# Search for message size limits in config or env files
rg -n 'message.*size|size.*limit|MAX.*MESSAGE' -i

# Look for backend server/API files that might validate messages
fd -e go -e ts -e js -i -x grep -l "message\|validation" {}

# Check for any package.json or config files with message size settings
fd -e json -e yaml -e yml -e env -x grep -l "message.*size\|limit" {}
```

Length of output: 262875
🏁 Script executed:

```bash
#!/bin/bash
# Search for all message limit references in chat files
echo "=== Checking for message length limit references ==="
rg -n "10000|100000" sippy-ng/src/chat/ --type js

echo -e "\n=== Checking for character count thresholds ==="
rg -n "90000|80000" sippy-ng/src/chat/ --type js

echo -e "\n=== Searching for other size limit references ==="
rg -n "max.*message|message.*max" sippy-ng/src/chat/ --type js -i | head -20
```

Length of output: 625
Update UI character counter to match new 100K limit
The backend validation was increased to 100,000 characters (line 128 of chatUtils.js), but the character counter display in ChatInput.js still shows the old 10,000 limit. This creates a confusing UX where:
- Users can actually send up to 100,000 characters
- But the UI counter displays {message.length}/10000
- Warning thresholds trigger at 80K/90K while the UI claims the max is 10K
Required fixes in sippy-ng/src/chat/ChatInput.js:
- Line 421: Change {message.length}/10000 to {message.length}/100000
- Lines 347-348: Verify warning color thresholds (currently 90,000 and 80,000) are appropriate for the new 100K limit, or adjust if needed
🤖 Prompt for AI Agents
In sippy-ng/src/chat/ChatInput.js around lines 347-348 and 421, the UI character
counter still shows the old 10,000 limit while backend validation now allows
100,000; update the display at line 421 from showing "/10000" to "/100000" and
verify/adjust the warning threshold logic at lines 347-348 so the warning colors
trigger at appropriate values for a 100,000-char limit (e.g., 80,000 and 90,000)
instead of thresholds calibrated to 10,000.
```js
const handleClose = () => {
  setPromptDescription('')
  setIncludeHistory(false)
  onClose()
}

const handleAIResult = (result) => {
  console.log('=== AI Generated Prompt Response ===')
  console.log('Full response length:', result.length)

  // Pass the generated YAML to parent (should be raw YAML)
  onYAMLGenerated(result)
  handleClose()
}
```
Close the AI modal when finishing generation.
Line 62: Without resetting aiModalOpen, the OneShotChatModal stays open even after the parent dialog closes, leaving users with a stale overlay that obscures the app. Add setAiModalOpen(false) to handleClose so both cancel and completion paths properly tear down the modal before returning control.
```diff
 const handleClose = () => {
-  setPromptDescription('')
-  setIncludeHistory(false)
-  onClose()
+  setAiModalOpen(false)
+  setPromptDescription('')
+  setIncludeHistory(false)
+  onClose()
 }
```

🤖 Prompt for AI Agents
In sippy-ng/src/chat/CreatePromptDialog.js around lines 62 to 75, the
handleClose function does not reset the aiModalOpen state so the
OneShotChatModal remains visible after the parent dialog closes; update
handleClose to call setAiModalOpen(false) (in addition to clearing
promptDescription and includeHistory and calling onClose) so both cancel and
completion flows properly tear down the AI modal before returning control.
```js
// Reset on close
setYamlContent('')
setFormData({
  name: '',
  description: '',
  hide: false,
  arguments: [],
  prompt: '',
})
setVersions([])
setValidationErrors([])
setSaveError(null)
setAiRefinementPrompt('')
}
```
Reset the refinement modal when the editor closes.
Line 195: When open flips to false, the cleanup branch resets most editor state but leaves aiModalOpen untouched. If the refinement modal was active, it remains visible even though the editor dialog disappeared. Reset aiModalOpen to false alongside the other cleanup state.
```diff
   setValidationErrors([])
   setSaveError(null)
-  setAiRefinementPrompt('')
+  setAiRefinementPrompt('')
+  setAiModalOpen(false)
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```js
// Reset on close
setYamlContent('')
setFormData({
  name: '',
  description: '',
  hide: false,
  arguments: [],
  prompt: '',
})
setVersions([])
setValidationErrors([])
setSaveError(null)
setAiRefinementPrompt('')
setAiModalOpen(false)
}
```
🤖 Prompt for AI Agents
In sippy-ng/src/chat/PromptEditor.js around lines 185 to 198, the cleanup branch
that resets editor state when the dialog closes does not reset the refinement
modal flag; add a call to setAiModalOpen(false) alongside the other state resets
so any open refinement modal is closed when the editor closes.
```js
const handleAIRefinementResult = (result) => {
  // Extract YAML from the result
  const yamlBlockRegex = /```(?:yaml|yml)?\s*\n([\s\S]*?)```/i
  const match = result.match(yamlBlockRegex)

  if (match) {
    const newYAML = match[1].trim()
    // Add to version history
    setVersions((prev) => [
      { yaml: newYAML, timestamp: new Date().toISOString() },
      ...prev.slice(0, 9), // Keep max 10 versions
    ])
    setYamlContent(newYAML)
    parseYAMLToForm(newYAML)
  } else {
    // If no YAML block found, try to use the whole result
    setVersions((prev) => [
      { yaml: result, timestamp: new Date().toISOString() },
      ...prev.slice(0, 9),
    ])
    setYamlContent(result)
    parseYAMLToForm(result)
  }
  setAiRefinementPrompt('')
}
```
Dismiss the refinement modal after applying AI output.
Line 341: handleAIRefinementResult updates the YAML but never flips aiModalOpen back to false, so the refinement modal lingers after every run and the user has to close it manually. Call setAiModalOpen(false) after processing the response.
```diff
   } else {
     // If no YAML block found, try to use the whole result
     setVersions((prev) => [
       { yaml: result, timestamp: new Date().toISOString() },
       ...prev.slice(0, 9),
     ])
     setYamlContent(result)
     parseYAMLToForm(result)
   }
-  setAiRefinementPrompt('')
+  setAiRefinementPrompt('')
+  setAiModalOpen(false)
```

🤖 Prompt for AI Agents
In sippy-ng/src/chat/PromptEditor.js around lines 324 to 348,
handleAIRefinementResult updates YAML and UI state but never closes the AI
refinement modal; after you finish applying the result (after updating versions,
yamlContent, parseYAMLToForm and clearing ai refinement prompt) call
setAiModalOpen(false) to dismiss the modal automatically so it doesn't linger
for the user.
```js
const input = document.createElement('input')
input.type = 'file'
input.accept = '.yaml,.yml'
input.onchange = (e) => {
  const file = e.target.files[0]
  if (file) {
    const reader = new FileReader()
    reader.onload = (event) => {
      try {
        const yamlContent = event.target.result
        // Split by document separator if multiple prompts
        const prompts = yamlContent.split(/\n---\n/)
        prompts.forEach((promptYAML) => {
          if (promptYAML.trim()) {
            setEditorInitialYAML(promptYAML.trim())
            setEditorPromptName(null)
            setEditorOpen(true)
          }
```
Handle multiple imported prompts instead of dropping them.
Line 231: The forEach inside the file reader keeps calling setEditorInitialYAML(...) and setEditorOpen(true) for every YAML document, but React batches those state updates so only the last document survives. Any earlier prompts in the file are silently discarded, breaking bulk-import. Please restructure the import flow so each document is surfaced (e.g., queue them and open them one by one, or materialize them immediately via saveLocalPrompt) instead of overwriting state inside the loop.
🤖 Prompt for AI Agents
In sippy-ng/src/chat/PromptManagerModal.js around lines 220-237, the file-reader
loop calls setEditorInitialYAML and setEditorOpen for each YAML document which
gets batched so only the last document is kept; instead, for each non-empty
document either immediately persist it (e.g., call the existing saveLocalPrompt
handler with the parsed YAML and a generated/parsed name) or enqueue all parsed
prompts into component state and then open them one-by-one (set the queue state
and open the editor for the first item), ensuring you do not repeatedly call
setEditorInitialYAML inside the forEach; implement one of these two flows so
every imported document is saved or queued and surfaced rather than overwritten.
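One shape for the queued flow the prompt describes, as a hypothetical hook (all names here are illustrative; only the three editor setters come from the code under review):

```js
import { useState } from 'react'

// Queue every parsed YAML document and surface them one at a time,
// instead of overwriting editor state inside a forEach.
function usePromptImportQueue({ setEditorInitialYAML, setEditorPromptName, setEditorOpen }) {
  const [importQueue, setImportQueue] = useState([])

  const startImport = (yamlContent) => {
    const docs = yamlContent
      .split(/\n---\n/)
      .map((d) => d.trim())
      .filter(Boolean)
    if (docs.length === 0) return
    setImportQueue(docs.slice(1)) // hold the rest for later
    setEditorPromptName(null)
    setEditorInitialYAML(docs[0]) // open the first document now
    setEditorOpen(true)
  }

  const handleEditorClose = () => {
    if (importQueue.length > 0) {
      // Advance to the next queued document rather than closing outright.
      const [next, ...rest] = importQueue
      setImportQueue(rest)
      setEditorPromptName(null)
      setEditorInitialYAML(next)
    } else {
      setEditorOpen(false)
    }
  }

  return { startImport, handleEditorClose }
}
```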
```jsx
return (
  <Box className={classes.editorContainer}>
    <div className={classes.textFieldContainer}>
      <TextField
        multiline
        value={value}
        onChange={(e) => onChange(e.target.value)}
        placeholder={placeholder}
        error={!!error}
        helperText={error}
        fullWidth
        variant="outlined"
        InputProps={{
          style: {
            fontFamily: 'Consolas, Monaco, "Courier New", monospace',
          },
        }}
      />
    </div>
  </Box>
)
```
Add safety check for optional onChange prop.
The onChange prop is optional in PropTypes but is called unconditionally in edit mode. This could cause a runtime error if the component is used without providing onChange.
Apply this diff to add a safety check:
```diff
 return (
   <Box className={classes.editorContainer}>
     <div className={classes.textFieldContainer}>
       <TextField
         multiline
         value={value}
-        onChange={(e) => onChange(e.target.value)}
+        onChange={(e) => onChange?.(e.target.value)}
         placeholder={placeholder}
         error={!!error}
         helperText={error}
         fullWidth
         variant="outlined"
         InputProps={{
           style: {
             fontFamily: 'Consolas, Monaco, "Courier New", monospace',
           },
         }}
       />
     </div>
   </Box>
 )
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```jsx
return (
  <Box className={classes.editorContainer}>
    <div className={classes.textFieldContainer}>
      <TextField
        multiline
        value={value}
        onChange={(e) => onChange?.(e.target.value)}
        placeholder={placeholder}
        error={!!error}
        helperText={error}
        fullWidth
        variant="outlined"
        InputProps={{
          style: {
            fontFamily: 'Consolas, Monaco, "Courier New", monospace',
          },
        }}
      />
    </div>
  </Box>
)
```
🤖 Prompt for AI Agents
In sippy-ng/src/chat/YamlEditor.js around lines 97 to 117, the component calls
onChange unconditionally which can throw if the optional prop is not provided;
update the onChange handler to guard the call (e.g., check if typeof onChange
=== 'function' or use onChange && onChange(...)) or provide a default no-op prop
so the component only invokes onChange when it exists.
Scheduling required tests:

@stbenjam: all tests passed! Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
Summary by CodeRabbit
Release Notes
New Features
Improvements
Dependencies