Configuration Guide

Complete reference for all OpenCode Memory configuration options.

Configuration File

Location: ~/.config/opencode/opencode-mem.jsonc

Format: JSONC (JSON with comments)

The configuration file is automatically created on first run with default values.
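Because the format is JSONC, inline // comments are allowed. For example, a minimal hand-edited override might look like this (both keys are documented below; the values are illustrative):

{
  // Prefer a smaller local embedding model
  "embeddingModel": "Xenova/all-MiniLM-L6-v2",
  // Inject fewer memories per prompt
  "maxMemories": 5
}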

Breaking Changes (v2.3)

User-Scoped Memories Completely Removed:

All memories are now project-scoped by default. User preferences are managed exclusively through the automatic user profile system.

Removed:

  • scope parameter from all memory operations
  • maxProjectMemories config option (use maxMemories instead)

Renamed:

  • userMemoryAnalysisInterval → userProfileAnalysisInterval

Migration Required:

// OLD
{
  "userMemoryAnalysisInterval": 10,
  "maxMemories": 5,
}

// NEW
{
  "userProfileAnalysisInterval": 10,
  "maxMemories": 10
}

Remove scope parameter from all memory operations:

// OLD
memory({ mode: "add", content: "...", scope: "project" })

// NEW
memory({ mode: "add", content: "..." })

Breaking Changes (v2.2)

User Profile System:

User-scoped memories are deprecated in favor of structured user profiles.

New Options:

  • userProfileMaxPreferences
  • userProfileMaxPatterns
  • userProfileMaxWorkflows
  • userProfileConfidenceDecayDays
  • userProfileChangelogRetentionCount

Behavior Changes:

  • memory({ mode: "add", scope: "user" }) now returns error
  • memory({ mode: "profile" }) returns new structure
  • User learning creates structured profile instead of individual memories

Migration: Existing user memories remain readable. Update your code to use the new profile structure.

Breaking Changes (v2.0)

Removed Options (no longer supported):

  • autoCaptureTokenThreshold
  • autoCaptureMinTokens
  • autoCaptureMaxMemories
  • autoCaptureSummaryMaxLength
  • autoCaptureContextWindow

New Options:

  • memoryProvider
  • userProfileAnalysisInterval
  • autoCaptureMaxIterations
  • autoCaptureIterationTimeout

Migration required: Remove deprecated options from your config file.

Complete Configuration Example

{
  "storagePath": "~/.opencode-mem/data",
  "embeddingModel": "Xenova/nomic-embed-text-v1",
  "embeddingDimensions": 768,
  "embeddingApiUrl": "",
  "embeddingApiKey": "",
  "webServerEnabled": true,
  "webServerPort": 4747,
  "webServerHost": "127.0.0.1",
  "maxVectorsPerShard": 50000,
  "autoCleanupEnabled": true,
  "autoCleanupRetentionDays": 30,
  "deduplicationEnabled": true,
  "deduplicationSimilarityThreshold": 0.9,
  "autoCaptureEnabled": true,
  "memoryProvider": "openai-chat",
  "memoryModel": "gpt-4",
  "memoryApiUrl": "https://api.openai.com/v1",
  "memoryApiKey": "",
  "userProfileAnalysisInterval": 10,
  "userProfileMaxPreferences": 20,
  "userProfileMaxPatterns": 15,
  "userProfileMaxWorkflows": 10,
  "userProfileConfidenceDecayDays": 30,
  "userProfileChangelogRetentionCount": 5,
  "autoCaptureMaxIterations": 5,
  "autoCaptureIterationTimeout": 30000,
  "similarityThreshold": 0.6,
  "maxMemories": 10,
  "injectProfile": true,
  "keywordPatterns": [],
  "containerTagPrefix": "opencode",
  "maxProfileItems": 5
}

Storage Settings

storagePath

Type: string
Default: ~/.opencode-mem/data

Database storage location. Supports tilde expansion for home directory.

{
  "storagePath": "~/.opencode-mem/data"
}

Custom location:

{
  "storagePath": "/mnt/data/opencode-mem"
}

maxVectorsPerShard

Type: number
Default: 50000

Maximum vectors per database shard before creating a new shard.

{
  "maxVectorsPerShard": 50000
}

Higher values (100000+) reduce shard count but may slow down individual queries.
Lower values (25000) create more shards but keep each shard fast.

Embedding Settings

embeddingModel

Type: string
Default: Xenova/nomic-embed-text-v1

Model name for generating embeddings.

Local models (no API required):

{
  "embeddingModel": "Xenova/nomic-embed-text-v1"
}

Supported local models:

  • Xenova/nomic-embed-text-v1 (768 dimensions)
  • Xenova/all-MiniLM-L6-v2 (384 dimensions)
  • Xenova/all-mpnet-base-v2 (768 dimensions)
  • Xenova/bge-small-en-v1.5 (384 dimensions)
  • Xenova/bge-base-en-v1.5 (768 dimensions)
  • Xenova/bge-large-en-v1.5 (1024 dimensions)

API-based models:

{
  "embeddingModel": "text-embedding-3-small",
  "embeddingApiUrl": "https://api.openai.com/v1",
  "embeddingApiKey": "sk-..."
}

embeddingDimensions

Type: number
Default: Auto-detected

Vector dimensions for the embedding model. Usually auto-detected.

{
  "embeddingDimensions": 768
}

Only set manually if using custom models or API endpoints.

embeddingApiUrl

Type: string
Default: ""

API endpoint for external embedding service (OpenAI-compatible).

{
  "embeddingApiUrl": "https://api.openai.com/v1"
}

Leave empty to use local models.

embeddingApiKey

Type: string
Default: ""

API key for external embedding service.

{
  "embeddingApiKey": "sk-your-api-key-here"
}

Can also use environment variable OPENAI_API_KEY.

Web Server Settings

webServerEnabled

Type: boolean
Default: true

Enable or disable the web interface.

{
  "webServerEnabled": true
}

Set to false to disable the web interface and save resources.

webServerPort

Type: number
Default: 4747

HTTP server port for web interface.

{
  "webServerPort": 4747
}

Change if port is already in use.

webServerHost

Type: string
Default: 127.0.0.1

Bind address for web server.

{
  "webServerHost": "127.0.0.1"
}

Use 0.0.0.0 to allow external access (not recommended for security reasons).
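If you do need access from another machine on a trusted network, a sketch of that configuration would be:

{
  "webServerHost": "0.0.0.0",
  "webServerPort": 4747
}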

Search Settings

similarityThreshold

Type: number
Default: 0.6
Range: 0.0 to 1.0

Minimum cosine similarity for search results.

{
  "similarityThreshold": 0.6
}

  • Higher values (0.8+): More strict, fewer but more relevant results
  • Lower values (0.4-0.6): More lenient, more results but less precise

maxMemories

Type: number
Default: 10

Maximum number of memories to inject into context and return in search results.

{
  "maxMemories": 10
}

Higher values provide more context but increase token usage. Lower values reduce context size.

Auto-Capture Settings

autoCaptureEnabled

Type: boolean
Default: true

Enable automatic memory capture from conversations.

{
  "autoCaptureEnabled": true
}

Requires memoryProvider, memoryModel, memoryApiUrl, and memoryApiKey to be configured.
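For example, a complete auto-capture block using OpenAI might look like this (model and endpoint are the values documented below; the API key is a placeholder):

{
  "autoCaptureEnabled": true,
  "memoryProvider": "openai-chat",
  "memoryModel": "gpt-4",
  "memoryApiUrl": "https://api.openai.com/v1",
  "memoryApiKey": "sk-your-api-key-here"
}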

memoryProvider

Type: string
Default: openai-chat

AI provider for memory extraction.

{
  "memoryProvider": "openai-chat"
}

Supported providers:

  • openai-chat: OpenAI Chat Completions API
  • anthropic: Anthropic Messages API
  • openai-responses: OpenAI Responses API (experimental)

memoryModel

Type: string
Default: ""
Required: Yes (if auto-capture enabled)

AI model for memory extraction.

{
  "memoryModel": "gpt-4"
}

Supported models:

  • OpenAI: gpt-4, gpt-4-turbo, gpt-3.5-turbo
  • Anthropic: claude-3-5-sonnet-20241022, claude-3-opus-20240229, claude-3-sonnet-20240229

memoryApiUrl

Type: string
Default: ""
Required: Yes (if auto-capture enabled)

API endpoint for memory extraction.

{
  "memoryApiUrl": "https://api.openai.com/v1"
}

Other providers (see the example after this list):

  • Anthropic: https://api.anthropic.com/v1
  • Groq: https://api.groq.com/openai/v1
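For instance, a sketch of an Anthropic-based extraction setup, combining the provider, model, and endpoint values listed in this guide (the API key is a placeholder):

{
  "memoryProvider": "anthropic",
  "memoryModel": "claude-3-5-sonnet-20241022",
  "memoryApiUrl": "https://api.anthropic.com/v1",
  "memoryApiKey": "your-anthropic-api-key"
}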

memoryApiKey

Type: string
Default: ""
Required: Yes (if auto-capture enabled)

API key for memory extraction service.

{
  "memoryApiKey": "sk-your-api-key-here"
}

userProfileAnalysisInterval

Type: number
Default: 10

Number of prompts between user learning analysis runs.

{
  "userProfileAnalysisInterval": 10
}

Set to 0 to disable user learning analysis.
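For example, to turn user learning analysis off entirely:

{
  "userProfileAnalysisInterval": 0
}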

Tuning:

  • Lower (5): More frequent analysis, higher cost
  • Default (10): Balanced
  • Higher (20): Less frequent, lower cost

autoCaptureMaxIterations

Type: number
Default: 5

Maximum retry iterations for capture.

{
  "autoCaptureMaxIterations": 5
}

autoCaptureIterationTimeout

Type: number
Default: 30000

Timeout per iteration in milliseconds.

{
  "autoCaptureIterationTimeout": 30000
}

userProfileMaxPreferences

Type: number
Default: 20

Maximum number of preferences to keep in user profile.

{
  "userProfileMaxPreferences": 20
}

Preferences are sorted by confidence score. Lower-confidence preferences are removed when the limit is exceeded.

userProfileMaxPatterns

Type: number
Default: 15

Maximum number of patterns to keep in user profile.

{
  "userProfileMaxPatterns": 15
}

Patterns are sorted by frequency. Less frequent patterns are removed when the limit is exceeded.

userProfileMaxWorkflows

Type: number
Default: 10

Maximum number of workflows to keep in user profile.

{
  "userProfileMaxWorkflows": 10
}

Workflows are sorted by frequency. Less frequent workflows are removed when the limit is exceeded.

userProfileConfidenceDecayDays

Type: number
Default: 30

Days before preference confidence starts to decay.

{
  "userProfileConfidenceDecayDays": 30
}

Preferences not reinforced within this period will gradually lose confidence and may be removed.

userProfileChangelogRetentionCount

Type: number
Default: 5

Number of profile versions to keep in changelog.

{
  "userProfileChangelogRetentionCount": 5
}

Older versions are automatically cleaned up. Useful for debugging and rollback.

Maintenance Settings

autoCleanupEnabled

Type: boolean
Default: true

Enable automatic cleanup of old memories.

{
  "autoCleanupEnabled": true
}

autoCleanupRetentionDays

Type: number
Default: 30

Number of days to retain memories before cleanup.

{
  "autoCleanupRetentionDays": 30
}

Set to 0 to disable retention-based cleanup.
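For example, to disable retention-based cleanup:

{
  "autoCleanupRetentionDays": 0
}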

Protection: Pinned memories and memories linked to prompts are never deleted.

deduplicationEnabled

Type: boolean
Default: true

Enable automatic deduplication of similar memories.

{
  "deduplicationEnabled": true
}

deduplicationSimilarityThreshold

Type: number
Default: 0.9
Range: 0.0 to 1.0

Similarity threshold for considering memories as duplicates.

{
  "deduplicationSimilarityThreshold": 0.9
}

Higher values (0.95+) are more conservative; lower values (0.85) are more aggressive.
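For example, a more conservative setting that only treats near-identical memories as duplicates:

{
  "deduplicationSimilarityThreshold": 0.95
}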

Advanced Settings

injectProfile

Type: boolean
Default: true

Inject user profile summary into conversation context.

{
  "injectProfile": true
}

keywordPatterns

Type: string[]
Default: []

Custom regex patterns for keyword detection.

{
  "keywordPatterns": [
    "\\bimportant\\b",
    "\\bcrucial\\b",
    "\\bkey point\\b"
  ]
}

Custom patterns are added on top of the default patterns (remember, memorize, etc.).

containerTagPrefix

Type: string
Default: opencode

Tag prefix for container-based organization.

{
  "containerTagPrefix": "opencode"
}

maxProfileItems

Type: number
Default: 5

Maximum items to include in profile summary.

{
  "maxProfileItems": 5
}

Environment Variables

OPENAI_API_KEY

Alternative to embeddingApiKey and memoryApiKey:

export OPENAI_API_KEY=sk-your-api-key-here

Configuration Validation

The system validates configuration on startup and logs warnings for:

  • Invalid port numbers
  • Missing required fields (when features enabled)
  • Invalid threshold values (must be 0.0-1.0)
  • Invalid model names
  • Inaccessible storage paths
  • Deprecated options (v2.0)

Hot Reload

Most configuration changes take effect immediately without restart:

  • Search thresholds
  • Memory limits
  • Keyword patterns
  • Maintenance settings

Requires restart:

  • Storage path
  • Embedding model
  • Web server settings
  • Database settings

Performance Tuning

For Speed

{
  "similarityThreshold": 0.7,
  "maxMemories": 3,
  "embeddingModel": "Xenova/all-MiniLM-L6-v2"
}

For Accuracy

{
  "similarityThreshold": 0.5,
  "maxMemories": 10,
  "embeddingModel": "Xenova/bge-large-en-v1.5"
}

For Low Memory

{
  "maxVectorsPerShard": 25000,
  "autoCleanupRetentionDays": 14,
  "deduplicationEnabled": true,
  "embeddingModel": "Xenova/all-MiniLM-L6-v2"
}

For Low Cost

{
  "memoryProvider": "openai-chat",
  "memoryModel": "gpt-3.5-turbo",
  "userProfileAnalysisInterval": 20,
  "embeddingModel": "Xenova/all-MiniLM-L6-v2"
}

Migration from v1.x

Step 1: Remove Deprecated Options

Delete these from your config:

{
  "autoCaptureTokenThreshold": 10000,
  "autoCaptureMinTokens": 20000,
  "autoCaptureMaxMemories": 10,
  "autoCaptureSummaryMaxLength": 0,
  "autoCaptureContextWindow": 3
}

Step 2: Add New Options

Add these to your config:

{
  "memoryProvider": "openai-chat",
  "userProfileAnalysisInterval": 10,
  "autoCaptureMaxIterations": 5,
  "autoCaptureIterationTimeout": 30000
}

Step 3: Verify Configuration

Check OpenCode logs for validation warnings.

Step 4: Test Auto-Capture

memory({ mode: "auto-capture-stats" })
