A premium, frontend-only web application designed to help you craft, refine, and revise your LLM prompts using a local AI server.
- Local & Private: Runs entirely in your browser and connects to your local LLM (LM Studio, Ollama, etc.). No API keys required.
- Smart Optimization: Turns freeform ideas into structured, professional prompts (YAML + Markdown).
- Refinement Chat: Discuss and plan improvements to your optimized prompt through an interactive chat interface. The chat provides context-aware suggestions to help you evaluate and iterate on your prompt without making direct changes until you click "Refine".
- Resizable UI: Adjust the split between the input area and chat window using the draggable resize handle, allowing you to customize your workspace layout.
- Result History: Navigate through previous versions of your optimized prompt to compare results.
- Premium UI: Features a modern glassmorphic design with dark mode and smooth animations.
You need a local LLM server running that is compatible with the OpenAI API format.
- LM Studio (Recommended):
  - Start the Local Server.
  - Default URL: `http://localhost:1234/v1`
  - Note: enabling CORS on the server is generally required if the LLM is running on a different machine.
- llama.cpp:
  - Start the server: `./server -m path/to/model.gguf --port 8080`
  - Default URL: `http://localhost:8080/v1`
  - Model Name: the model name entered in settings is ignored by the standard llama.cpp server; it always uses the model loaded at startup.
- Ollama:
  - Run `ollama serve`.
  - Default URL: `http://localhost:11434/v1`
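All three servers expose the same OpenAI-compatible REST API, so you can sanity-check whichever one you run before opening the app. A minimal sketch (the base URL is whatever you will later enter in settings):

```javascript
// Build the /models URL from an OpenAI-compatible base URL,
// tolerating an optional trailing slash.
function modelsUrl(baseUrl) {
  return baseUrl.replace(/\/+$/, "") + "/models";
}

// Ask the local server which models it has available. All three servers
// above answer in the OpenAI format: { "data": [{ "id": "...", ... }] }.
async function listModels(baseUrl) {
  const res = await fetch(modelsUrl(baseUrl));
  if (!res.ok) throw new Error(`Server responded with HTTP ${res.status}`);
  const body = await res.json();
  return body.data.map((m) => m.id);
}
```

If `listModels` resolves, the same URL should work in the app's settings.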
- Open the App: Simply open `index.html` in your web browser. No installation or build server is needed.
- Configure API:
- Click the Settings (Gear) icon in the top right.
- Enter your local server URL (e.g., `http://localhost:1234/v1`).
- Enter a model name (or click the refresh icon to fetch available models).
- Click Save.
- Optimize a Prompt:
- Type your idea in the main input box (e.g., "Write a prompt to create a python script for a snake game").
- Or, paste an existing prompt into the main input box.
- Click Optimize.
- The structured result will appear in the right panel.
- Refine with Chat:
- Use the chat window at the bottom left to ask for changes (e.g., "Make it object-oriented").
- Click Refine to update the result based on the chat context.
- Browse History:
- Use the `<` and `>` arrows in the output header to view previous versions.
- To revert to a previous version, simply arrow back to the version you want to resume from and continue refining from there.
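Under the hood, every OpenAI-compatible server in the requirements section accepts the same chat-completions call, which is roughly what an Optimize request boils down to. The sketch below uses the standard OpenAI request shape; the system prompt is illustrative, not the one this app actually sends:

```javascript
// Sketch of a chat-completions request in the standard OpenAI format.
// The system prompt here is a placeholder, not the app's real one.
function buildOptimizeRequest(model, userIdea) {
  return {
    model,
    messages: [
      {
        role: "system",
        content: "Rewrite the user's idea as a structured prompt (YAML + Markdown).",
      },
      { role: "user", content: userIdea },
    ],
    stream: false,
  };
}

// POST the request to the local server and return the assistant's reply.
async function optimize(baseUrl, model, userIdea) {
  const res = await fetch(baseUrl.replace(/\/+$/, "") + "/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildOptimizeRequest(model, userIdea)),
  });
  const body = await res.json();
  return body.choices[0].message.content;
}
```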
All stored information (API settings, chat history, and optimization results) is kept only in your browser's local storage. This data persists across page reloads for convenience, but can be erased at any time by clearing your browser's cache/site data. No information is ever sent to external servers; the only network traffic goes to your local LLM endpoint.
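To see what this persistence could look like, here is a minimal sketch using the Web Storage API. The key name `promptOptimizer.settings` is an assumption for illustration, not the app's actual key:

```javascript
// Persist settings as JSON under a single key. The key name is
// hypothetical; inspect DevTools > Application > Local Storage to see
// the app's real keys.
const SETTINGS_KEY = "promptOptimizer.settings";

function saveSettings(storage, settings) {
  storage.setItem(SETTINGS_KEY, JSON.stringify(settings));
}

function loadSettings(storage) {
  const raw = storage.getItem(SETTINGS_KEY);
  return raw ? JSON.parse(raw) : null;
}
```

In the browser you would pass `window.localStorage` as `storage`; clearing site data removes the entry, which is why your history does not survive that.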
- "API Error" or No Response:
- Ensure your local server is running.
- Check the Console (F12) for CORS errors. If you see CORS issues, ensure your local server allows connections from `null` (the `file://` origin) or from the address the app is served from.
- Settings Button Not Working:
- Refresh the page. Ensure JavaScript is enabled.
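When diagnosing an "API Error", note that in a browser both an unreachable server and a CORS block surface as the same generic `TypeError` from `fetch`; only the DevTools console shows which one occurred. A small probe you could paste into the console (assuming the OpenAI-compatible `/models` route used by the servers above):

```javascript
// Probe the base URL and report a coarse status. A thrown fetch error
// means either the server is down or CORS blocked the request; the
// F12 console distinguishes the two cases.
async function probe(baseUrl) {
  try {
    const res = await fetch(baseUrl.replace(/\/+$/, "") + "/models");
    return res.ok ? "ok" : `http ${res.status}`;
  } catch (err) {
    return "unreachable or blocked by CORS";
  }
}
```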
Licensed under the MIT License.
- PII Safety Audit
- Because it connects only to localhost, this app does not access the internet and does not collect any data.
