Llmkit is a comprehensive toolkit for managing, testing, and deploying LLM prompts with a focus on versioning, evaluation, and developer-friendly workflows.
Llmkit lets you dynamically craft prompts with modern template syntax, and manage, version, test, and run evals on them across any provider and any model.
Our mission is to make prompt crafting dynamic, prompt management safe, and prompt evaluations simple.
- Template Variables: Dynamic system and user prompts with Liquid templating
- Prompt Evaluation: Create test sets and measure prompt performance
- Prompt Versioning: Track changes to prompts over time
- OpenAI Compatible API: Use with existing OpenAI client libraries
- Provider Integration: Support for multiple LLM providers with a unified API
Llmkit supports three types of prompts:
- Static System Prompt: Basic prompt with fixed system instructions
  - Great for simple chat interfaces
  - No dynamic content needed
- Dynamic System Prompt: System prompt with variable substitution
  - Variables are inserted into the system prompt
  - User messages can be free-form
- Dynamic System & User Prompts: Both system and user prompts are templates
  - Both can contain variables
  - Ideal for structured inputs with a consistent format
Llmkit uses a Jinja-style templating syntax:
```
You are a helpful assistant named {{ assistant_name }}.
The user's name is {{ user_name }}.

{% if formal_tone %}
Please maintain a professional tone in your responses.
{% else %}
Feel free to be casual and friendly.
{% endif %}

Here are the topics to discuss:
{% for topic in topics %}
- {{ topic }}
{% endfor %}
```
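Because the syntax is Jinja-style, you can get a quick feel for the semantics by rendering the template above with Python's `jinja2` package (used here purely for illustration — Llmkit itself renders templates server-side):

```python
from jinja2 import Template

template = Template(
    "You are a helpful assistant named {{ assistant_name }}.\n"
    "The user's name is {{ user_name }}.\n"
    "{% if formal_tone %}Please maintain a professional tone in your responses."
    "{% else %}Feel free to be casual and friendly.{% endif %}\n"
    "Here are the topics to discuss:\n"
    "{% for topic in topics %}- {{ topic }}\n{% endfor %}"
)

rendered = template.render(
    assistant_name="Ada",
    user_name="Alex",
    formal_tone=False,       # switches between the two tone branches
    topics=["embeddings", "prompt versioning"],
)
print(rendered)
```

With `formal_tone=False`, the else-branch is emitted and the loop expands to one bullet per topic.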
Llmkit provides 100% compatible API endpoints matching OpenAI's API:
- Standard API: `/v1/chat/completions`
- Streaming API: `/v1/chat/completions/stream`
This means you can use any OpenAI client library with Llmkit:
```python
from openai import OpenAI

# Point to Llmkit server
client = OpenAI(
    api_key="llmkit_yourkey",
    base_url="http://localhost:8000/v1",
)

# Use like a normal OpenAI client
response = client.chat.completions.create(
    model="YOUR-PROMPT-KEY",  # Use your Llmkit prompt key as the model name
    messages=[
        {"role": "system", "content": '{"name": "Alex", "expertise": "AI"}'},
        {"role": "user", "content": "Tell me about machine learning"}
    ]
)
```
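For the dedicated streaming route, the request body follows the same shape. The sketch below only builds the request with the standard library and does not send it; the exact endpoint path and the convention of passing template variables as JSON in the system message are taken from this README, and the header names are an assumption:

```python
import json
import urllib.request

def build_stream_request(prompt_key: str, variables: dict, user_msg: str,
                         base_url: str = "http://localhost:8000") -> urllib.request.Request:
    """Build a POST request for Llmkit's streaming endpoint (not sent here)."""
    payload = {
        "model": prompt_key,  # the Llmkit prompt key stands in for the model name
        "messages": [
            # Dynamic template variables travel as a JSON system message
            {"role": "system", "content": json.dumps(variables)},
            {"role": "user", "content": user_msg},
        ],
    }
    return urllib.request.Request(
        url=f"{base_url}/v1/chat/completions/stream",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer llmkit_yourkey"},
        method="POST",
    )

req = build_stream_request("YOUR-PROMPT-KEY", {"name": "Alex"}, "Tell me about ML")
```

Reading the response as server-sent events is left to an SSE client of your choice.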
Llmkit's evaluation system allows you to:
- Create evaluation test sets with specific inputs
- Run those inputs against different prompt versions
- Score and compare performance
- Track improvements over time
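Conceptually, an evaluation run pairs a fixed test set with one prompt version and aggregates per-case scores so versions can be compared. The helper names below are illustrative, not Llmkit's actual API:

```python
def run_eval(test_set, call_prompt, score):
    """Run each test input through a prompt version and average the scores.

    test_set:    list of (input_vars, expected) pairs
    call_prompt: fn(input_vars) -> model output (e.g. an Llmkit API call)
    score:       fn(output, expected) -> float in [0, 1]
    """
    results = [score(call_prompt(vars_), expected) for vars_, expected in test_set]
    return sum(results) / len(results) if results else 0.0

# Stubbed example: exact-match scoring against a canned "model"
tests = [({"q": "2+2"}, "4"), ({"q": "capital of France"}, "Paris")]
fake_model = lambda v: {"2+2": "4", "capital of France": "Rome"}[v["q"]]
accuracy = run_eval(tests, fake_model, lambda out, exp: float(out == exp))
print(accuracy)  # 0.5 — one of two cases matched
```

Running the same test set against two prompt versions and comparing the averages is the core of the workflow.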
Llmkit's prompt testing system allows you to:
- Easily create any type of prompt (simple, dynamic)
- Test in chat or completion mode
- Input dynamic variables when needed
Every LLM call has a detailed trace that you can view directly in the Llmkit UI.
Backend:
- Language: Rust
- Web Framework: Axum
- Database: SQLite with SQLx for type-safe queries
- Templating Engine: Tera templates

Frontend:
- Framework: Vue.js (Nuxt.js)
- Styling: Tailwind CSS
- Rust Toolchain: Latest stable version of Rust and Cargo
- OpenRouter API Key: You must have an OpenRouter API key to use Llmkit
- SQLite: For database functionality
- Node.js 16+ or Bun: For frontend development
- sqlx-cli: install with `cargo install sqlx-cli`
- Docker & Docker Compose: For containerized deployment
The easiest way to get started with Llmkit is the `llmkit` command:
- Install the command: `./install.sh`
- Start the application: `llmkit start`
- IMPORTANT: Set your OpenRouter API key
  - Edit the `.env` file in the `backend` directory
  - Add your OpenRouter API key: `OPENROUTER_API_KEY=your_key_here`
  - Restart Llmkit if it's already running
This command will:
- Create the SQLite database if it doesn't exist
- Run all necessary migrations
- Set up the .env file if it doesn't exist
- Start both the backend and frontend servers
The backend will be available at http://localhost:8000 and the UI at http://localhost:3000.
If you prefer to set things up manually, follow these steps:
- Create a `.env` file in the backend directory:
```bash
cp .env.example backend/.env
```
- Edit the `.env` file with your OpenRouter API key and other settings:
```
RUST_LOG=info
DATABASE_URL="sqlite:absolute/path/to/backend/llmkit.db"
OPENROUTER_API_KEY=your_openrouter_key_here
JWT_SECRET=your_secure_random_string
```
- Start the server:
```bash
cd backend
cargo run
```
The server will start on http://localhost:8000.
- Install dependencies:
```bash
cd ui
npm install  # or bun install
```
- Start the development server:
```bash
npm run dev  # or bun run dev
```
The UI will be available at http://localhost:3000.
- Clone the repository:
```bash
git clone https://github.com/yourusername/llmkit.git
cd llmkit
```
- Create a `.env` file in the root directory:
```bash
cp .env.example .env
```
- Edit the `.env` file with your API keys and a secure JWT secret:
```
# Required
OPENROUTER_API_KEY=your_openrouter_key_here
JWT_SECRET=your_secure_random_string

# Optional - add only the providers you need
ANTHROPIC_API_KEY=your_anthropic_key
# etc.
```
- Build and start the containers:
```bash
docker-compose up -d
```
The backend will be available at http://localhost:8000 and the UI at http://localhost:3000.
Feel free to fork and open a PR for any changes you'd like to see. If you have a feature request or idea, consider opening an issue first so we can confirm it's something we're interested in supporting.