# Configuration Guide

Complete reference for all OpenCode Memory configuration options.

Location: `~/.config/opencode/opencode-mem.jsonc`
Format: JSONC (JSON with comments)

The configuration file is automatically created on first run with default values.
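For reference, JSONC is simply JSON that permits comments, so entries in the file can be annotated:

```jsonc
{
  // Comments like this are valid in JSONC
  "webServerPort": 4747 // inline comments work too
}
```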
**User-Scoped Memories Completely Removed:**

All memories are now project-scoped by default. User preferences are managed exclusively through the automatic user profile system.

Removed:

- `scope` parameter from all memory operations
- `maxProjectMemories` config option (use `maxMemories` instead)

Renamed:

- `userMemoryAnalysisInterval` → `userProfileAnalysisInterval`
**Migration Required:**

Remove the `scope` parameter from all memory operations:

```js
// OLD
memory({ mode: "add", content: "...", scope: "project" })

// NEW
memory({ mode: "add", content: "..." })
```

**User Profile System:**
User-scoped memories are deprecated in favor of structured user profiles.
New Options:

- `userProfileMaxPreferences`
- `userProfileMaxPatterns`
- `userProfileMaxWorkflows`
- `userProfileConfidenceDecayDays`
- `userProfileChangelogRetentionCount`

Behavior Changes:

- `memory({ mode: "add", scope: "user" })` now returns an error
- `memory({ mode: "profile" })` returns the new structure
- User learning creates a structured profile instead of individual memories

Migration: Existing user memories remain readable. Update code to use the new profile structure.
Removed Options (no longer supported):

- `autoCaptureTokenThreshold`
- `autoCaptureMinTokens`
- `autoCaptureMaxMemories`
- `autoCaptureSummaryMaxLength`
- `autoCaptureContextWindow`

New Options:

- `memoryProvider`
- `userProfileAnalysisInterval`
- `autoCaptureMaxIterations`
- `autoCaptureIterationTimeout`

Migration required: remove the deprecated options from your config file.
## Default Configuration

```jsonc
{
  "storagePath": "~/.opencode-mem/data",
  "embeddingModel": "Xenova/nomic-embed-text-v1",
  "embeddingDimensions": 768,
  "embeddingApiUrl": "",
  "embeddingApiKey": "",
  "webServerEnabled": true,
  "webServerPort": 4747,
  "webServerHost": "127.0.0.1",
  "maxVectorsPerShard": 50000,
  "autoCleanupEnabled": true,
  "autoCleanupRetentionDays": 30,
  "deduplicationEnabled": true,
  "deduplicationSimilarityThreshold": 0.9,
  "autoCaptureEnabled": true,
  "memoryProvider": "openai-chat",
  "memoryModel": "gpt-4",
  "memoryApiUrl": "https://api.openai.com/v1",
  "memoryApiKey": "",
  "userProfileAnalysisInterval": 10,
  "userProfileMaxPreferences": 20,
  "userProfileMaxPatterns": 15,
  "userProfileMaxWorkflows": 10,
  "userProfileConfidenceDecayDays": 30,
  "userProfileChangelogRetentionCount": 5,
  "autoCaptureMaxIterations": 5,
  "autoCaptureIterationTimeout": 30000,
  "similarityThreshold": 0.6,
  "maxMemories": 10,
  "injectProfile": true,
  "keywordPatterns": [],
  "containerTagPrefix": "opencode",
  "maxProfileItems": 5
}
```

### storagePath

Type: `string`
Default: `~/.opencode-mem/data`

Database storage location. Supports tilde expansion for the home directory.
```jsonc
{
  "storagePath": "~/.opencode-mem/data"
}
```

Custom location:

```jsonc
{
  "storagePath": "/mnt/data/opencode-mem"
}
```

### maxVectorsPerShard

Type: `number`
Default: `50000`

Maximum vectors per database shard before a new shard is created.
```jsonc
{
  "maxVectorsPerShard": 50000
}
```

Higher values (100000+) reduce the shard count but may slow individual queries. Lower values (25000) create more shards but keep each shard fast.
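The tradeoff above can be illustrated with a quick sketch. This is illustrative only: the plugin's actual shard-assignment logic is internal, and `shardCount` is a hypothetical helper, not part of OpenCode Memory.

```javascript
// Illustrative sketch, assuming vectors are assigned to shards sequentially.
function shardCount(totalVectors, maxVectorsPerShard) {
  if (totalVectors === 0) return 1; // an empty store still has one shard
  return Math.ceil(totalVectors / maxVectorsPerShard);
}

console.log(shardCount(120000, 50000)); // 3 shards at the default setting
console.log(shardCount(120000, 25000)); // 5 smaller, faster shards
```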
### embeddingModel

Type: `string`
Default: `Xenova/nomic-embed-text-v1`

Model name for generating embeddings.

Local models (no API required):

```jsonc
{
  "embeddingModel": "Xenova/nomic-embed-text-v1"
}
```

Supported local models:

- `Xenova/nomic-embed-text-v1` (768 dimensions)
- `Xenova/all-MiniLM-L6-v2` (384 dimensions)
- `Xenova/all-mpnet-base-v2` (768 dimensions)
- `Xenova/bge-small-en-v1.5` (384 dimensions)
- `Xenova/bge-base-en-v1.5` (768 dimensions)
- `Xenova/bge-large-en-v1.5` (1024 dimensions)
API-based models:
```jsonc
{
  "embeddingModel": "text-embedding-3-small",
  "embeddingApiUrl": "https://api.openai.com/v1",
  "embeddingApiKey": "sk-..."
}
```

### embeddingDimensions

Type: `number`
Default: auto-detected

Vector dimensions for the embedding model. Usually auto-detected.

```jsonc
{
  "embeddingDimensions": 768
}
```

Only set this manually if you are using custom models or API endpoints.
### embeddingApiUrl

Type: `string`
Default: `""`

API endpoint for an external embedding service (OpenAI-compatible).

```jsonc
{
  "embeddingApiUrl": "https://api.openai.com/v1"
}
```

Leave empty to use local models.
### embeddingApiKey

Type: `string`
Default: `""`

API key for the external embedding service.

```jsonc
{
  "embeddingApiKey": "sk-your-api-key-here"
}
```

The `OPENAI_API_KEY` environment variable can be used instead.
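A typical resolution order (config value first, then the environment variable) can be sketched as follows. `resolveApiKey` is a hypothetical helper; the plugin's actual precedence rules may differ.

```javascript
// Hypothetical key resolution: explicit config wins, env var is the fallback.
function resolveApiKey(config) {
  return config.embeddingApiKey || process.env.OPENAI_API_KEY || "";
}

const key = resolveApiKey({ embeddingApiKey: "" }); // falls back to the env var
```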
### webServerEnabled

Type: `boolean`
Default: `true`

Enable or disable the web interface.

```jsonc
{
  "webServerEnabled": true
}
```

Set to `false` to disable the web interface and save resources.
### webServerPort

Type: `number`
Default: `4747`

HTTP server port for the web interface.

```jsonc
{
  "webServerPort": 4747
}
```

Change this if the port is already in use.
### webServerHost

Type: `string`
Default: `127.0.0.1`

Bind address for the web server.

```jsonc
{
  "webServerHost": "127.0.0.1"
}
```

Use `0.0.0.0` to allow external access (not recommended for security reasons).
### similarityThreshold

Type: `number`
Default: `0.6`
Range: 0.0 to 1.0

Minimum cosine similarity for search results.

```jsonc
{
  "similarityThreshold": 0.6
}
```

- Higher values (0.8+): stricter; fewer but more relevant results
- Lower values (0.4-0.6): more lenient; more results, but less precise
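To see what the threshold filters, here is a minimal sketch of cosine similarity and the cutoff check. It is illustrative only; the real search runs against the vector store, not raw arrays.

```javascript
// Cosine similarity between two equal-length vectors: dot(a,b) / (|a| * |b|).
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

const threshold = 0.6;
const score = cosineSimilarity([1, 0, 1], [1, 1, 1]);
console.log(score >= threshold); // true: the score is ~0.816, above the cutoff
```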
### maxMemories

Type: `number`
Default: `10`

Maximum memories to inject into context and return in search results.

```jsonc
{
  "maxMemories": 10
}
```

Higher values provide more context but increase token usage; lower values reduce context size.
### autoCaptureEnabled

Type: `boolean`
Default: `true`

Enable automatic memory capture from conversations.

```jsonc
{
  "autoCaptureEnabled": true
}
```

Requires `memoryProvider`, `memoryModel`, `memoryApiUrl`, and `memoryApiKey` to be configured.
### memoryProvider

Type: `string`
Default: `openai-chat`

AI provider for memory extraction.

```jsonc
{
  "memoryProvider": "openai-chat"
}
```

Supported providers:

- `openai-chat`: OpenAI Chat Completions API
- `anthropic`: Anthropic Messages API
- `openai-responses`: OpenAI Responses API (experimental)
### memoryModel

Type: `string`
Default: `""`
Required: Yes (if auto-capture is enabled)

AI model for memory extraction.

```jsonc
{
  "memoryModel": "gpt-4"
}
```

Supported models:

- OpenAI: `gpt-4`, `gpt-4-turbo`, `gpt-3.5-turbo`
- Anthropic: `claude-3-5-sonnet-20241022`, `claude-3-opus-20240229`, `claude-3-sonnet-20240229`
### memoryApiUrl

Type: `string`
Default: `""`
Required: Yes (if auto-capture is enabled)

API endpoint for memory extraction.

```jsonc
{
  "memoryApiUrl": "https://api.openai.com/v1"
}
```

Other providers:

- Anthropic: `https://api.anthropic.com/v1`
- Groq: `https://api.groq.com/openai/v1`
### memoryApiKey

Type: `string`
Default: `""`
Required: Yes (if auto-capture is enabled)

API key for the memory extraction service.

```jsonc
{
  "memoryApiKey": "sk-your-api-key-here"
}
```

### userProfileAnalysisInterval

Type: `number`
Default: 10
Number of prompts between user learning analysis.
```jsonc
{
  "userProfileAnalysisInterval": 10
}
```

Set to 0 to disable user learning analysis.

Tuning:

- Lower (5): more frequent analysis, higher cost
- Default (10): balanced
- Higher (20): less frequent analysis, lower cost
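The interval semantics can be sketched as a simple prompt counter. `shouldAnalyze` is a hypothetical illustration of the documented behavior (run every N prompts, 0 disables); the plugin's internal trigger logic may differ.

```javascript
// Illustrative trigger check: analysis fires every `interval` prompts.
function shouldAnalyze(promptCount, interval) {
  if (interval === 0) return false; // 0 disables user learning analysis
  return promptCount % interval === 0;
}

console.log(shouldAnalyze(10, 10)); // true: analysis runs on the 10th prompt
console.log(shouldAnalyze(7, 10));  // false: between analysis runs
console.log(shouldAnalyze(7, 0));   // false: disabled
```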
### autoCaptureMaxIterations

Type: `number`
Default: `5`

Maximum retry iterations for capture.

```jsonc
{
  "autoCaptureMaxIterations": 5
}
```

### autoCaptureIterationTimeout

Type: `number`
Default: `30000`

Timeout per capture iteration, in milliseconds.

```jsonc
{
  "autoCaptureIterationTimeout": 30000
}
```

### userProfileMaxPreferences

Type: `number`
Default: 20
Maximum number of preferences to keep in user profile.
```jsonc
{
  "userProfileMaxPreferences": 20
}
```

Preferences are sorted by confidence score. Lower-confidence preferences are removed when the limit is exceeded.
### userProfileMaxPatterns

Type: `number`
Default: `15`

Maximum number of patterns to keep in the user profile.

```jsonc
{
  "userProfileMaxPatterns": 15
}
```

Patterns are sorted by frequency. Less frequent patterns are removed when the limit is exceeded.
### userProfileMaxWorkflows

Type: `number`
Default: `10`

Maximum number of workflows to keep in the user profile.

```jsonc
{
  "userProfileMaxWorkflows": 10
}
```

Workflows are sorted by frequency. Less frequent workflows are removed when the limit is exceeded.
### userProfileConfidenceDecayDays

Type: `number`
Default: `30`

Days before preference confidence starts to decay.

```jsonc
{
  "userProfileConfidenceDecayDays": 30
}
```

Preferences not reinforced within this period gradually lose confidence and may be removed.
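To make the setting's intent concrete, here is a purely hypothetical decay sketch. The actual decay curve and rate used by OpenCode Memory are not documented here; both `decayedConfidence` and the 1%-per-day rate are assumptions for illustration only.

```javascript
// Hypothetical linear decay: confidence is untouched inside the window,
// then loses an assumed 1% per overdue day (floored at zero).
function decayedConfidence(confidence, daysSinceReinforced, decayDays) {
  if (daysSinceReinforced <= decayDays) return confidence; // still fresh
  const overdueDays = daysSinceReinforced - decayDays;
  return Math.max(0, confidence - overdueDays * 0.01);
}

console.log(decayedConfidence(0.9, 20, 30)); // 0.9: within the 30-day window
console.log(decayedConfidence(0.9, 60, 30)); // roughly 0.6: 30 days overdue
```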
### userProfileChangelogRetentionCount

Type: `number`
Default: `5`

Number of profile versions to keep in the changelog.

```jsonc
{
  "userProfileChangelogRetentionCount": 5
}
```

Older versions are cleaned up automatically. Useful for debugging and rollback.
### autoCleanupEnabled

Type: `boolean`
Default: `true`

Enable automatic cleanup of old memories.

```jsonc
{
  "autoCleanupEnabled": true
}
```

### autoCleanupRetentionDays

Type: `number`
Default: `30`

Number of days to retain memories before cleanup.

```jsonc
{
  "autoCleanupRetentionDays": 30
}
```

Set to 0 to disable retention-based cleanup.

Protection: pinned memories and memories linked to prompts are never deleted.
### deduplicationEnabled

Type: `boolean`
Default: `true`

Enable automatic deduplication of similar memories.

```jsonc
{
  "deduplicationEnabled": true
}
```
### deduplicationSimilarityThreshold

Type: `number`
Default: `0.9`
Range: 0.0 to 1.0

Similarity threshold above which two memories are treated as duplicates.

```jsonc
{
  "deduplicationSimilarityThreshold": 0.9
}
```

Higher values (0.95+) are more conservative; lower values (0.85) are more aggressive.
### injectProfile

Type: `boolean`
Default: `true`

Inject the user profile summary into conversation context.

```jsonc
{
  "injectProfile": true
}
```

### keywordPatterns

Type: `string[]`
Default: `[]`

Custom regex patterns for keyword detection.

```jsonc
{
  "keywordPatterns": [
    "\\bimportant\\b",
    "\\bcrucial\\b",
    "\\bkey point\\b"
  ]
}
```

These add to the default patterns (remember, memorize, etc.).
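The patterns above can be tried out against sample text with a sketch like this. `matchesKeyword` is a hypothetical helper, and the regex flags (case-insensitivity here) are an assumption; the plugin's actual matching behavior may differ.

```javascript
// Illustrative keyword matching against custom patterns, assuming
// case-insensitive matching.
const keywordPatterns = ["\\bimportant\\b", "\\bcrucial\\b", "\\bkey point\\b"];

function matchesKeyword(text, patterns) {
  return patterns.some((p) => new RegExp(p, "i").test(text));
}

console.log(matchesKeyword("This is a key point to remember", keywordPatterns)); // true
console.log(matchesKeyword("Nothing special here", keywordPatterns)); // false
```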
### containerTagPrefix

Type: `string`
Default: `opencode`

Tag prefix for container-based organization.

```jsonc
{
  "containerTagPrefix": "opencode"
}
```
### maxProfileItems

Type: `number`
Default: `5`

Maximum items to include in the profile summary.

```jsonc
{
  "maxProfileItems": 5
}
```

## Environment Variables

As an alternative to `embeddingApiKey` and `memoryApiKey`:

```sh
export OPENAI_API_KEY=sk-your-api-key-here
```

## Validation

The system validates the configuration on startup and logs warnings for:
- Invalid port numbers
- Missing required fields (when features enabled)
- Invalid threshold values (must be 0.0-1.0)
- Invalid model names
- Inaccessible storage paths
- Deprecated options (v2.0)
Most configuration changes take effect immediately without restart:
- Search thresholds
- Memory limits
- Keyword patterns
- Maintenance settings
Requires restart:
- Storage path
- Embedding model
- Web server settings
- Database settings
Example configurations:

```jsonc
{
  "similarityThreshold": 0.7,
  "maxMemories": 3,
  "embeddingModel": "Xenova/all-MiniLM-L6-v2"
}
```

```jsonc
{
  "similarityThreshold": 0.5,
  "maxMemories": 10,
  "embeddingModel": "Xenova/bge-large-en-v1.5"
}
```

```jsonc
{
  "maxVectorsPerShard": 25000,
  "autoCleanupRetentionDays": 14,
  "deduplicationEnabled": true,
  "embeddingModel": "Xenova/all-MiniLM-L6-v2"
}
```

```jsonc
{
  "memoryProvider": "openai-chat",
  "memoryModel": "gpt-3.5-turbo",
  "userProfileAnalysisInterval": 20,
  "embeddingModel": "Xenova/all-MiniLM-L6-v2"
}
```

Delete these from your config:
```jsonc
{
  "autoCaptureTokenThreshold": 10000,
  "autoCaptureMinTokens": 20000,
  "autoCaptureMaxMemories": 10,
  "autoCaptureSummaryMaxLength": 0,
  "autoCaptureContextWindow": 3
}
```

Add these to your config:

```jsonc
{
  "memoryProvider": "openai-chat",
  "userProfileAnalysisInterval": 10,
  "autoCaptureMaxIterations": 5,
  "autoCaptureIterationTimeout": 30000
}
```

Check OpenCode logs for validation warnings.
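The deprecated-option cleanup can be done by hand, or sketched as a one-off script over a parsed config object. This is an illustrative sketch, not a supplied migration tool; note that the real file is JSONC, so comments must be stripped (or edits made manually) before `JSON.parse` would accept it.

```javascript
// Options removed in v2.0 (no longer supported).
const REMOVED = [
  "autoCaptureTokenThreshold",
  "autoCaptureMinTokens",
  "autoCaptureMaxMemories",
  "autoCaptureSummaryMaxLength",
  "autoCaptureContextWindow",
];

// New v2.0 options with their documented defaults.
const NEW_DEFAULTS = {
  memoryProvider: "openai-chat",
  userProfileAnalysisInterval: 10,
  autoCaptureMaxIterations: 5,
  autoCaptureIterationTimeout: 30000,
};

// Drop removed keys and fill in new options, keeping existing values.
function migrateConfig(config) {
  const migrated = { ...NEW_DEFAULTS, ...config };
  for (const key of REMOVED) delete migrated[key];
  return migrated;
}

console.log(migrateConfig({ autoCaptureMinTokens: 20000, maxMemories: 10 }));
```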
```js
memory({ mode: "auto-capture-stats" })
```

- Memory Operations - Use the memory tool
- Auto-Capture System - Automatic memory extraction
- Web Interface - Unified timeline view
- Performance Tuning - Optimization strategies