The spellbook for your codebase — chronicle decisions, context, and lessons your AI companions can actually read.
Lore is a local AI memory system for software projects. It stores what you know as plain YAML alongside your code — then publishes that knowledge as instruction files that GitHub Copilot, Claude, Cursor, Codex, and other AI tools read automatically.
No external database. No API keys. No cloud sync. Everything lives in .lore/ next to your code.
By default, lore exports CHRONICLE.md plus all agent adapter files, so security and instruction preambles are written consistently for every tool.
AI coding tools are stateless — they don't remember why you chose PostgreSQL over SQLite, that the auth layer must never bypass JWT validation, or that the frontend team deprecated the v1 API six months ago. You end up re-explaining the same context in every session.
Lore fixes that. You capture knowledge once; every AI session inherits it automatically.
Your decisions, facts, and lessons
↓ lore add / lore relic
.lore/ (plain YAML)
↓ lore export
CHRONICLE.md ← full project memory (one source of truth)
↓ referenced by lean instruction files:
copilot-instructions.md · AGENTS.md · CLAUDE.md · .cursor/rules/memory.md
↓ on-demand:
/lore → reads CHRONICLE.md into AI context when you ask
↓
Every AI tool reads your repo context — without you repeating yourself
A single piece of knowledge: a decision, a fact, a hard-won lesson. Short, specific, retrievable by semantic search.
lore add decisions "Use PostgreSQL — we need JSONB and row-level locking"
lore add facts "Auth service is the sole issuer of JWTs — never bypass it"
lore add preferences "Prefer explicit over clever — this codebase has many contributors"
A named collection of spells. Default tomes: decisions, facts, preferences, summaries. You can add your own in .lore/config.yaml.
Tomes are just directory names — each spell is one YAML file filed under its tome.
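For illustration, a single spell file might look like this. The exact on-disk field names are an assumption sketched from the metadata described later in this README, not a schema guarantee:

```yaml
# .lore/decisions/a1b2c3d4.yaml  (illustrative sketch, not the exact schema)
id: a1b2c3d4
category: decisions
content: "Use PostgreSQL — we need JSONB and row-level locking"
tags: [database, backend]
related_to: [e07d12be]
review_date: 2026-06-01
```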
A raw artifact saved as-is for later processing. Use a relic when things are moving fast and you don't have time to curate.
Capture a relic now → distill spells from it later.
A relic can be anything: a pasted session log, a git diff, a doc excerpt from Confluence, a long Slack thread. It lands in .lore/relics/ untouched. When you have time, you open it with lore relic distill and choose exactly which parts become proper spells.
lore export writes your spells into the files AI tools pick up automatically, using a two-layer architecture:
CHRONICLE.md — the single source of truth. Contains every spell grouped by tome. All lean instruction files reference it.
| Lean instruction file | Tool | On by default |
|---|---|---|
| `CHRONICLE.md` | All tools (full memory) | ✅ |
| `.github/copilot-instructions.md` | GitHub Copilot | ✅ |
| `AGENTS.md` | OpenAI Codex, agent frameworks | ✅ |
| `CLAUDE.md` | Anthropic Claude | ✅ |
| `.cursor/rules/memory.md` | Cursor | ✅ |
| `.github/prompts/lore.prompt.md` | `/lore` trigger in Copilot Chat | ✅ |
| `.windsurfrules` | Windsurf / Codeium | ✅ |
| `GEMINI.md` | Gemini CLI | ✅ |
| `.clinerules` | Cline | ✅ |
| `CONVENTIONS.md` | Aider | ✅ |
Lean instruction files are intentionally small — they contain your project description, security preamble, and a single line telling the AI to read CHRONICLE.md for full context. This keeps per-request token overhead minimal.
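A lean instruction file might look roughly like this. This is an illustrative sketch; the exact wording lore generates may differ:

```markdown
# acme-api
FastAPI service for order management.   <!-- your project_description -->

## Security
- Never disable SSL verification.
- All secrets via environment variables.

For full project memory (decisions, facts, lessons), read CHRONICLE.md.
```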
To disable targets after onboarding, set them in .lore/config.yaml:
export_targets:
windsurf: false
gemini: false
cline: false
aider: false
Exports are atomic — a crash mid-write never leaves a partial file.
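The atomicity guarantee is the classic write-temp-then-rename pattern; a minimal Python sketch of the technique (not lore's actual implementation):

```python
import os
import tempfile

def atomic_write(path: str, content: str) -> None:
    """Write content to path so readers never observe a partial file."""
    directory = os.path.dirname(os.path.abspath(path))
    # Write to a temp file in the same directory (same filesystem),
    # then atomically swap it into place.
    fd, tmp = tempfile.mkstemp(dir=directory, suffix=".tmp")
    try:
        with os.fdopen(fd, "w", encoding="utf-8") as f:
            f.write(content)
            f.flush()
            os.fsync(f.fileno())
        os.replace(tmp, path)  # atomic on both POSIX and Windows
    except BaseException:
        os.unlink(tmp)
        raise
```

A crash before `os.replace` leaves only a stray `.tmp` file; the destination is either the old version or the complete new one.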
pip install lore-book
If you publish the Scoop bucket, this becomes the cleanest install path for Windows users:
scoop bucket add cptplastic https://github.com/CptPlastic/scoop-lore-book
scoop install lore-book
Until that bucket is live, use pipx.
Use the Python launcher so the command works consistently across Windows setups:
py -m pip install --upgrade lore-book
If you prefer isolated CLI installs, pipx is the smoothest option on Windows:
py -m pip install --user pipx
py -m pipx ensurepath
pipx install lore-book
If you are installing from this repository, use the bootstrap script:
powershell -ExecutionPolicy Bypass -File .\scripts\install_windows.ps1
Optional modes:
# Force plain pip install mode
powershell -ExecutionPolicy Bypass -File .\scripts\install_windows.ps1 -Mode pip
# Install from local source path via pipx
powershell -ExecutionPolicy Bypass -File .\scripts\install_windows.ps1 -SourcePath .
Scoop will be the easiest Windows package-manager path once the bucket is published; until then, `pipx install lore-book` is the simplest route for most users.
This repo now generates Windows packaging artifacts during release:
- Scoop manifest: `packaging/scoop/lore-book.json`
- winget submission helper: `packaging/winget/submission-<version>.md`
After each release, use these to publish:
- Submit `packaging/scoop/lore-book.json` to your Scoop bucket repository.
- Use `packaging/winget/submission-<version>.md` to open/update a PR in `microsoft/winget-pkgs`.
A ready-to-push Scoop bucket template is included in packaging/scoop-bucket-template/.
This repository includes a workflow that can open a PR against CptPlastic/scoop-lore-book whenever packaging/scoop/lore-book.json changes:
- Workflow: `.github/workflows/sync-scoop-bucket.yml`
- Required secret: `SCOOP_BUCKET_PAT`
`SCOOP_BUCKET_PAT` should have write access to CptPlastic/scoop-lore-book (classic repo scope, or a fine-grained token with contents + pull_requests write).
For local development:
pip install -e .
Requirements: Python 3.10+. Lore works out of the box with TF-IDF search. Dense vector search (sentence-transformers) is optional and can be enabled with the setup wizard below.
Want dense vector search? Run:
lore setup semantic
The wizard will:
- check whether `sentence-transformers` is installed
- offer to install semantic dependencies if missing
- validate model loading with your configured `embedding_model`, `model_endpoint`, and SSL settings
If you prefer non-interactive setup:
lore setup semantic --install-now
Default endpoint for Hugging Face models is https://huggingface.co.
If dense model loading fails, lore automatically falls back to TF-IDF so search still works.
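The TF-IDF fallback is the standard bag-of-words technique; a self-contained sketch of the idea (an illustration, not lore's internals):

```python
import math
from collections import Counter

def tfidf_vectors(docs: list[str]) -> list[dict[str, float]]:
    """Compute TF-IDF weight maps for a small corpus of documents."""
    tokenized = [doc.lower().split() for doc in docs]
    # Document frequency: in how many docs does each term appear?
    df = Counter(term for tokens in tokenized for term in set(tokens))
    n = len(docs)
    vectors = []
    for tokens in tokenized:
        tf = Counter(tokens)
        vectors.append({
            term: (count / len(tokens)) * math.log(n / df[term])
            for term, count in tf.items()
        })
    return vectors

def cosine(a: dict[str, float], b: dict[str, float]) -> float:
    """Cosine similarity between two sparse weight maps."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0
```

Queries are vectorized the same way and ranked by cosine similarity; dense embeddings simply replace the weight maps with model vectors.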
This repository now includes a plain static docs site in docs/ for GitHub Pages.
Preview locally:
cd docs
python -m http.server 8000
Then open http://localhost:8000.
Publishing is handled by .github/workflows/docs.yml on pushes to main/master.
To migrate existing docs pages into this repo, add or edit static files under docs/.
lore onboard
The onboarding command explains every concept and walks you through store setup, security policy, your first spell, and publishing in an interactive step-by-step flow. It also prompts for a project description (auto-detected from pyproject.toml or README) that appears at the top of every lean instruction file. Start here if you're new.
`lore onboard` and `lore init` also add local adapter files to .gitignore by default, so only shared chronicle memory is committed unless you choose otherwise.
# Interactive, step-by-step
lore add
# Save and auto-link related spells
lore add decisions "Use FastAPI — async support + automatic OpenAPI docs" --tags api,backend --auto-associate
# One-liner (scriptable, CI-friendly)
lore add decisions "Use FastAPI — async support + automatic OpenAPI docs"
lore add preferences "Always use type hints" --tags style,python
lore add facts "Minimum supported Python is 3.10"
# Suggest or apply related links for an existing spell
lore associate <id>
lore associate <id> --apply
# Semantic search — finds conceptually related spells, not just keyword matches
lore search "why did we choose FastAPI"
lore search "authentication strategy"
# List all spells
lore list
# List by tome
lore list decisions
# Delete a spell
lore remove <id>
Spell IDs are short UUID prefixes; lore list shows them.
Lore can now suggest relationship links between spells automatically.
# During add
lore add facts "Scoop installer uses local extracted payload" --tags windows,scoop --auto-associate
# Tune association behavior
lore add decisions "..." --auto-associate --associate-top 5 --associate-min-score 0.30
# Existing spells
lore associate a1b2c3d4
lore associate a1b2c3d4 --apply
How scoring works:
- semantic similarity score (dense embeddings or TF-IDF fallback)
- shared tag bonus
- same-category bonus
Applied links are symmetric in related_to so both spells reference each other.
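The scoring recipe above can be sketched as follows. The weights here are illustrative assumptions; lore's real thresholds live in .lore/config.yaml:

```python
def association_score(
    similarity: float,
    tags_a: set[str],
    tags_b: set[str],
    same_category: bool,
) -> float:
    """Combine semantic similarity with tag/category bonuses (illustrative weights)."""
    score = similarity
    shared = tags_a & tags_b
    if shared:
        score += 0.10 * len(shared)  # shared tag bonus
    if same_category:
        score += 0.05                # same-tome bonus
    return min(score, 1.0)

def link_symmetric(spells: dict[str, set[str]], a: str, b: str) -> None:
    """Apply a link in both directions so related_to stays symmetric."""
    spells[a].add(b)
    spells[b].add(a)
```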
The 1.4.0 release adds relationship metadata, linting, trust history, and configurable extraction patterns.
You can link memories together, mark deprecations, and set review dates.
# Interactive (includes metadata steps)
lore add
# Scriptable form
lore add decisions "Use OAuth device flow for CLI auth" \
--depends-on 02e0914c,6c8887fe \
--related-to e07d12be \
--review-date 2026-06-01
# Mark a memory as deprecated later
lore edit e07d12be
Metadata fields:
- `depends_on` - memory IDs this entry relies on
- `related_to` - memory IDs that are contextually related
- `deprecated` - lifecycle flag for old guidance
- `review_date` - date to revisit stale decisions
# Show findings only
lore lint
# Fail CI on errors
lore lint --fail-on error
# Fail CI on errors and warnings
lore lint --fail-on error,warning
lore lint checks:
- empty memory content
- likely duplicates
- invalid instruction scope tags
- broken relationship references
- invalid or past review dates
- invalid deprecated flag types
- trust score threshold warnings
# Recompute trust from git signals
lore trust refresh
# Explain one memory's trust and see recent history
lore trust explain e07d12be
Each refresh writes rolling trust snapshots (score, level, reasons, timestamp) so score changes are auditable.
Use the new interactive pattern manager:
lore setup extract-patterns
This lets you add regex or prefix rules that auto-categorize commit-derived memories during extraction.
Then extract as usual:
lore extract --last 20 --auto
Example conventions:
- `chore(release):` -> decisions
- `docs:` -> summaries
- `fix:` -> facts
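A prefix rule of this kind reduces to a small matcher. The rules below mirror the example conventions and are an illustration, not lore's pattern engine:

```python
import re

# Illustrative rules: commit-subject prefix -> tome
RULES = [
    (re.compile(r"^chore\(release\):"), "decisions"),
    (re.compile(r"^docs:"), "summaries"),
    (re.compile(r"^fix:"), "facts"),
]

def categorize(subject: str, default: str = "facts") -> str:
    """Return the tome for a commit subject based on prefix rules."""
    for pattern, tome in RULES:
        if pattern.match(subject):
            return tome
    return default
```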
Use relics when you want to preserve raw information without slowing down to decide what matters.
# Paste session notes interactively (enter . to finish)
lore relic capture
# Pull in a file — meeting notes, spec doc, wiki export
lore relic capture --file session-notes.md --title "Auth redesign session"
# Snapshot the current working-tree + staged diff
lore relic capture --git-diff --title "Pre-deploy changes"
# Capture the last N commits (messages + diffs)
lore relic capture --git-log 5 --title "Sprint 12 wrap-up"
# Read from clipboard (Windows: PowerShell Get-Clipboard, macOS: pbpaste, Linux: xclip)
lore relic capture --clipboard --title "Slack thread on rate limiting"
# Pipe anything in
git log --oneline -20 | lore relic capture --stdin --title "Recent commit history"
cat confluence-export.txt | lore relic capture --stdin --title "Architecture decision"
# Browse relics (shows preview of content)
lore relic list
# Read one in full
lore relic view a3f1b2c4
# Distill the good parts into spells
lore relic distill a3f1b2c4
# Delete a relic
lore relic remove a3f1b2c4
lore relic distill shows you the relic content and walks you through extracting spells one at a time:
─── Spell #1 ────────────────────────────────────────────
✦ Inscription the wisdom to enshrine (. to seal the book): We chose CQRS to
separate read and write models after hitting contention on the orders table
✦ Tome which grimoire? [decisions]:
✓ Spell a1b2c3d4 sealed into decisions.
─── Spell #2 ────────────────────────────────────────────
...
Each spell links back to its source relic. The tome selection is sticky — after you choose decisions for spell #1, it defaults to decisions for spell #2. Enter . to finish.
# Write all enabled context files (default: CHRONICLE + all adapters)
lore export
# Write one target only
lore export --format chronicle
lore export --format copilot
lore export --format agents
lore export --format claude
lore export --format cursor
lore export --format prompt # .github/prompts/lore.prompt.md
lore export --format windsurf # requires: windsurf: true in .lore/config.yaml
lore export --format gemini
lore export --format cline
lore export --format aider
If no project_description is set, lore export will remind you to run lore onboard — lean instruction files are more useful with a one-line project summary at the top.
Exports are regenerated every run. Adapter files are gitignored by default, so teams can commit only CHRONICLE.md unless they opt into versioning adapter files.
The prompt export target writes .github/prompts/lore.prompt.md. In GitHub Copilot Chat, type /lore to invoke it — the AI will read CHRONICLE.md and surface context relevant to your current task. No setup beyond running lore export.
Treat lore memory as layered trust, even before advanced trust metadata exists:
- Shared trusted memory (commit this)
  - `CHRONICLE.md` is your canonical reviewed memory.
  - Only include decisions/facts you want every collaborator and agent to inherit.
- Local working memory (do not commit)
  - Keep generated adapter files (`AGENTS.md`, `CLAUDE.md`, `.github/copilot-instructions.md`, etc.) local by default.
  - Use them as personal tool wrappers around the same shared chronicle.
- Raw untrusted intake
  - Capture noisy inputs as relics first (`lore relic capture`).
  - Distill only verified points into spells (`lore relic distill`).
- Practical review loop
  - Add candidate memory.
  - Validate against code/tests/docs.
  - Export the chronicle.
  - Commit `CHRONICLE.md` only when reviewed.
- Trust signals you can use today
  - Reserve `decisions` and `facts` for high-confidence entries.
  - Use tags to mark confidence state (for example: `verified`, `needs-review`, `deprecated`).
  - Move or remove stale entries quickly with `lore remove` + re-add in correct form.
You can auto-score existing memories from git metadata (author, source, activity, tags):
lore trust refresh
Preview only (no writes):
lore trust refresh --dry-run
Explain one memory's score:
lore trust explain <id>
lore trust explain <id> --recompute
Tune trust thresholds in .lore/config.yaml:
trust:
default_score: 50
chronicle_min_score: 60
trusted_authors:
- "Your Name"
author_weights:
"Release Bot": 10
When chronicle_min_score is greater than 0, lore export includes only entries at or above that score in CHRONICLE.md.
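The threshold acts as a simple filter at export time. A sketch of that behavior, assuming a per-memory trust score and the defaults shown above (illustrative field names, not lore's code):

```python
def chronicle_entries(
    memories: list[dict],
    chronicle_min_score: int = 60,
    default_score: int = 50,
) -> list[dict]:
    """Keep only memories whose trust score meets the chronicle threshold."""
    if chronicle_min_score <= 0:
        return list(memories)  # threshold disabled: include everything
    return [
        m for m in memories
        if m.get("trust_score", default_score) >= chronicle_min_score
    ]
```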
lore hook install opens an interactive wizard that installs a .git/hooks/post-commit script:
lore hook installThe wizard asks whether you want:
- Auto-extract — scan each new commit message for decisions and facts, store them automatically
- Auto-export — regenerate all AI context files after every commit so they're always current
The generated hook is clearly marked # Installed by lore. Remove it safely with:
lore hook uninstall
Lore-managed hooks include a re-entry guard and shared lock (.git/.lore-hook.lock) so running both post-commit and post-merge hooks does not recurse.
Install a post-merge hook that syncs shared CHRONICLE.md updates into local .lore/ whenever pull/merge changes the chronicle:
lore hook sync-install
Remove it safely with:
lore hook sync-uninstall
# Extract from the last 20 commits
lore extract --last 20
Lore scans commit messages for structured knowledge and adds it to your store.
If a teammate updates CHRONICLE.md and you pull those changes, import them back into your local .lore/ store with:
lore sync
Useful flags:
# Preview only
lore sync --dry-run
# Import from a different markdown file
lore sync --file ./path/to/CHRONICLE.md
# Import only (skip export pass)
lore sync --no-export
lore sync deduplicates by category/content (and scope tags for instructions), so re-running it is safe.
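Category/content deduplication can be pictured as an identity-key set. The key construction here is an illustrative assumption, not lore's exact key:

```python
def dedupe_key(memory: dict) -> tuple:
    """Build an identity key: category + normalized content (+ scope tags for instructions)."""
    key = (memory["category"], " ".join(memory["content"].split()).lower())
    if memory["category"] == "instructions":
        key += (tuple(sorted(memory.get("tags", []))),)
    return key

def merge(store: list[dict], incoming: list[dict]) -> list[dict]:
    """Import incoming memories, skipping ones the store already has."""
    seen = {dedupe_key(m) for m in store}
    added = [m for m in incoming if dedupe_key(m) not in seen]
    return store + added
```

Because the key is derived from content rather than IDs, re-importing the same chronicle is idempotent.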
If you run the watcher daemon, CHRONICLE sync can happen automatically:
lore awaken
By default, lore awaken watches .lore/ changes for export and also watches CHRONICLE.md to import/sync changes back into .lore/.
Config toggles:
# Disable CHRONICLE auto-sync while daemon is running
lore config auto_sync_chronicle false
# Re-enable it
lore config auto_sync_chronicle true
You can also override per run:
lore awaken --no-sync-chronicle
Automatic memory linking can be configured globally in .lore/config.yaml:
association:
enabled: true
auto_apply_min_score: 0.55
suggest_min_score: 0.35
max_links_per_memory: 3
stages:
add: true
edit: true
sync: true
extract: true
watch: false
Watch-mode linking is available but off by default:
lore awaken --associate
harmonize is the contradiction-checking and rollup feature.
It does two things in one pass:
- clusters closely related spells into candidate summary rollups
- flags likely contradictions across spells in the same tome
By default it is report-only and writes nothing.
If you want a guided setup flow instead of editing YAML directly:
lore setup harmonize
The setup wizard walks through each option interactively. During the AI summary step it checks your environment for LORE_AI_API_KEY or OPENAI_API_KEY and shows a clear status — green if a key is found, yellow with setup instructions if not. The extra AI fields (model, URL, timeout) only appear when you opt in.
# Preview rollups + contradictions
lore harmonize
# Broaden candidate recall
lore harmonize --min-score 0.50 --contradiction-min-confidence 0.60
# Persist rollup summaries
lore harmonize --apply
# Persist rollups and contradiction-resolution suggestions
lore harmonize --apply --apply-resolutions
The current implementation is intentionally conservative:
- source spells are never overwritten
- harmonize maintains one rolling `harmonize:snapshot` summary entry by default
- created summaries are linked back to their source spells unless disabled
When a snapshot already exists, harmonize updates it in place so Chronicle stays compact. Legacy harmonize rollup/resolution entries are pruned during apply.
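The update-in-place behavior is a plain upsert keyed on the snapshot tag. A sketch: the `harmonize:snapshot` tag comes from the list above, while the data shape is illustrative:

```python
SNAPSHOT_TAG = "harmonize:snapshot"

def upsert_snapshot(summaries: list[dict], text: str) -> list[dict]:
    """Replace the rolling harmonize snapshot if present, else append it."""
    for entry in summaries:
        if SNAPSHOT_TAG in entry.get("tags", []):
            entry["content"] = text  # update in place: Chronicle stays compact
            return summaries
    summaries.append({"content": text, "tags": [SNAPSHOT_TAG]})
    return summaries
```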
Harmonize can also run from the daemon:
# Watch .lore and auto-create harmonized rollups
lore awaken --harmonize
# Also persist contradiction-resolution suggestions while watching
lore awaken --harmonize --harmonize-apply-resolutions
This is disabled by default because background harmonization can create noisy summaries in fast-moving stores.
Add a harmonize block to .lore/config.yaml:
harmonize:
enabled: true
watch: false
top_k: 3
min_score: 0.62
max_rollups: 10
contradiction_min_confidence: 0.67
suggest_resolutions: true
apply_resolutions: false
ai_summary:
enabled: false
model: gpt-4o-mini
base_url: https://api.openai.com/v1
timeout_seconds: 12
max_output_tokens: 260
max_chars: 1400
Field meanings:
- `enabled` - global on/off switch for harmonize features
- `watch` - allow `lore awaken` to run harmonize automatically without passing `--harmonize`
- `top_k` - how many related spells are considered per harmonize anchor
- `min_score` - minimum association score required before a rollup candidate is proposed
- `max_rollups` - cap on generated rollup candidates per run
- `contradiction_min_confidence` - minimum confidence required before a contradiction is shown
- `suggest_resolutions` - include resolution suggestions in report mode
- `apply_resolutions` - when harmonize writes during watch mode, also persist resolution suggestions
- `ai_summary.enabled` - when true, harmonize asks an AI model to write the snapshot text
- `ai_summary.model` - chat-completions model name used for AI snapshot generation
- `ai_summary.base_url` - OpenAI-compatible API base URL
- `ai_summary.timeout_seconds` - request timeout for AI snapshot generation
- `ai_summary.max_output_tokens` - response token budget for generated snapshot text
- `ai_summary.max_chars` - hard cap for stored snapshot size
For API credentials, set one of these environment variables before running harmonize:
export LORE_AI_API_KEY=sk-... # preferred
# or
export OPENAI_API_KEY=sk-...
lore setup harmonize will tell you if a key is detected in your environment. Without a key, AI summary silently falls back to the local deterministic snapshot — no errors, no noise.
For small, curated stores:
harmonize:
top_k: 2
min_score: 0.70
max_rollups: 6
contradiction_min_confidence: 0.75
suggest_resolutions: true
apply_resolutions: false
Use this when the store is mostly high-signal and you want only the strongest findings.
For medium team stores:
harmonize:
top_k: 3
min_score: 0.62
max_rollups: 10
contradiction_min_confidence: 0.67
suggest_resolutions: true
apply_resolutions: false
This is the default balance for most repos.
For noisy or rapidly changing stores:
harmonize:
top_k: 4
min_score: 0.50
max_rollups: 15
contradiction_min_confidence: 0.58
suggest_resolutions: true
apply_resolutions: false
Use this when you want broader recall and are willing to review more false positives.
- Raise `min_score` if rollups are grouping unrelated spells.
- Lower `min_score` if harmonize is missing obvious sibling spells.
- Raise `contradiction_min_confidence` if contradiction reports feel noisy.
- Lower `contradiction_min_confidence` if you want earlier warning on drifting facts.
- Keep `apply_resolutions` off until your store has stable review habits.
# Check for updates
lore update --check-only
# Prompted install
lore update
# Non-interactive install
lore update --yes
Optional startup auto-update toggle:
lore config auto_update true
With auto_update enabled, running plain lore will check PyPI and auto-install newer versions when available.
lore security configures a security preamble injected at the top of every export. This ensures every AI tool that reads your repo context also receives your security constraints before anything else.
lore security
The preamble can include:
- OWASP Top 10 reference (prevents the classics: injection, broken auth, SSRF, etc.)
- Security policy file link (e.g. `SECURITY.md`)
- CODEOWNERS notice — warns that sensitive paths need human review
- Custom rules — any project-specific edicts ("Never disable SSL verification", "All secrets via env vars", etc.)
This is especially useful in GitHub Enterprise environments where Copilot should always be reminded of your security posture before providing suggestions.
.lore/
config.yaml ← store settings, categories, model config, security
decisions/ ← why things were built a certain way
facts/ ← project context, constraints, team conventions
preferences/ ← coding style, tooling choices
summaries/ ← AI session summaries, sprint recaps
relics/ ← raw captured artifacts (sessions, diffs, docs)
embeddings/
index.json ← local semantic search index (no external DB)
Each spell and relic is a plain YAML file. No database engine, no lock files, no proprietary format. You can read, edit, and commit them directly.
.lore/ is automatically added to .gitignore on lore init. Local adapter exports are also gitignored by default so teams can commit only CHRONICLE.md as shared memory.
lore ui
A retro phosphor-green terminal browser for searching, reading, adding, and exporting memories. It live-reloads whenever .lore/ files change on disk — open it in a split pane while you work.
TUI keys:
- `a` add
- `u` edit
- `d` delete
- `x` suggest/apply associations for selected spell
- `h` run harmonize preview/apply (rollups + contradiction checks)
- `g` dependency map
- `e` export
# Start watching — auto-exports on every .lore change
lore awaken
# Run in background
lore awaken --background
# Optional: auto-harmonize while watching
lore awaken --harmonize
# Stop the daemon
lore slumber
The daemon watches .lore/ with filesystem events and regenerates all export files the moment any spell or config changes. Zero friction — add a spell, your AI tools get it immediately.
lore doctor
# Pretty machine-readable report
lore doctor --json
# Compact one-line JSON for CI logs/parsers
lore doctor --json-compact
# Fail unless status is healthy
lore doctor --json --strict
# Bound semantic probe runtime (seconds)
lore doctor --json --model-timeout 8
# Apply safe automatic repairs before reporting
lore doctor --fix
# Preview repairs without applying them
lore doctor --fix-dry-run --json
Reports:
- Whether the .lore/ store exists and is readable
- Which semantic search mode is active (embedding model or TF-IDF fallback)
- Whether the configured model endpoint is reachable
- Counts of spells by tome and relics
JSON output includes a top-level status field for easy automation:
- `healthy` - store exists and semantic search is active
- `degraded` - store exists but semantic model is unavailable (TF-IDF fallback)
- `error` - store missing or doctor could not run required checks
Doctor JSON also includes recommended_action so scripts and users can surface the next best command automatically.
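The three status levels reduce to a small decision that scripts can mirror; a sketch of the documented semantics, not lore's code:

```python
def doctor_status(store_exists: bool, semantic_active: bool) -> str:
    """Map store/model health onto doctor's three status levels."""
    if not store_exists:
        return "error"     # store missing: nothing else can be checked
    if semantic_active:
        return "healthy"   # store + embedding model both working
    return "degraded"      # store is fine but search runs on TF-IDF fallback
```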
lore doctor --fix applies only safe automatic remediations:
- initialize a missing .lore/ store in the current directory
- repair missing or invalid identity metadata
- rebuild the semantic index
- export current AI context files
- install the lore post-commit hook when inside a git repo and no hook exists yet
Use lore doctor --fix-dry-run to preview these actions without modifying the repo.
Automation-friendly JSON is also available on core operational commands:
lore list --json
lore search "auth strategy" --json-compact
lore lint --json
CI gating examples:
# Fail only on hard errors
lore doctor --json-compact | jq -e '.status != "error"' >/dev/null
# Fail on anything except healthy
lore doctor --json --strictPowerShell (Windows CI):
$report = lore doctor --json-compact | ConvertFrom-Json
if ($report.status -eq "error") { throw "lore doctor failed" }
Run a clean, isolated CLI smoke test before publishing:
./smoke.sh
What it does:
- creates a temporary workspace + virtualenv
- installs the current project as a normal package (`pip install .`)
- runs `lore version`, `lore init`, `lore add`, `lore trust refresh --dry-run`, and `lore export --format chronicle`
- exits non-zero on failure
Optional environment variables:
PYTHON_BIN=python3.12 ./smoke.sh
KEEP_SMOKE=1 ./smoke.sh
Use the GitHub Actions workflow Prepare Release to automate versioning and changelog updates.
What it does:
- bumps the `src/lore/__init__.py` version (patch, minor, major, or an explicit version)
- generates/updates `CHANGELOG.md` from commit subjects since the last tag
- commits and pushes the version + changelog update
- creates and pushes a git tag
- creates a GitHub Release with generated release notes
- triggers the existing PyPI publish workflow when the release is published
How to run it:
- Open GitHub Actions for this repo.
- Run Prepare Release.
- Choose a bump (patch/minor/major) or provide an explicit version.
After it completes, you should see a new tag, updated CHANGELOG.md, and a published release.
By default Lore downloads models from https://huggingface.co. If you're behind a ZScaler proxy or using an internal HuggingFace mirror:
lore config model_endpoint https://artifactory.example.com/artifactory/api/huggingfaceml/huggingface
lore config model_ssl_verify false # only if SSL inspection breaks certificate validation
Run lore doctor to confirm the model downloads and loads from your endpoint.
| Command | Args / Flags | What it does |
|---|---|---|
| `onboard` | | Guided setup — concepts, store, security, first spell, export |
| `init` | `[path]` | Create a `.lore/` store in a directory |
| `add` | `[category] [content]` | Store a spell (interactive if no args) |
| `list` | `[category] [--json] [--json-compact]` | List spells, optionally filtered by tome |
| `search` | `<query> [--top N] [--json] [--json-compact]` | Semantic search across all spells |
| `remove` | `<id>` | Delete a spell |
| `extract` | `[--last N]` | Pull spells from git commit messages |
| `sync` | `[--file PATH] [--dry-run] [--no-export]` | Import shared CHRONICLE.md entries into local `.lore/` |
| `associate-audit` | `[--hub-threshold N]` | Audit graph quality: dangling refs, one-way links, hubs, and orphans |
| `associate-relink` | `[--ids CSV] [--top N] [--min-score F] [--apply]` | Recompute related links for all or selected spells (dry-run by default) |
| `associate-prune` | `[--min-score F] [--min-age-days N] [--apply]` | Prune weak stale links using score and age gates (dry-run by default) |
| `associate-heal` | `[--apply]` | Fix one-way links by adding reverse links with same scores (dry-run by default) |
| `associate-repair` | `[--apply]` | Full graph repair: heal one-way links, prune stale, audit and recommend (dry-run by default) |
| `export` | `[--format F]` | Write AI context files (chronicle, agents, copilot, cursor, claude, prompt, windsurf, gemini, cline, aider, all) |
| `config` | `<key> <value>` | Set a config value |
| `security` | | Configure the security preamble for exports |
| `doctor` | `[--json] [--json-compact] [--strict] [--fix] [--fix-dry-run] [--model-timeout S]` | Store + model health report with optional safe auto-remediation |
| `lint` | `[--fail-on LEVELS] [--json] [--json-compact]` | Check memory quality and optionally emit machine-readable findings |
| `trust refresh` | `[--dry-run]` | Recompute trust scores/levels from git + memory metadata |
| `trust explain` | `<id> [--recompute]` | Show trust signals and scoring reasons for one memory |
| `hook install` | | Install git post-commit hook (wizard) |
| `hook uninstall` | | Safely remove the lore-managed git hook |
| `hook sync-install` | | Install git post-merge hook to sync CHRONICLE.md into `.lore/` |
| `hook sync-uninstall` | | Safely remove the lore-managed post-merge sync hook |
| `index rebuild` | | Rebuild the semantic search index from scratch |
| `version` | `[--check]` | Show installed lore version and optionally check PyPI |
| `update` | `[--check-only] [--yes]` | Check for a new version and optionally install it |
| `awaken` | `[--background] [--debounce S] [--sync-chronicle/--no-sync-chronicle] [--associate/--no-associate] [--associate-top N] [--associate-min-score F]` | Watch `.lore` for export, optionally sync CHRONICLE.md, and optionally auto-link high-confidence related memories |
| `ui` | | Open the interactive terminal browser |
| `slumber` | | Stop the background daemon |
| `relic capture` | `[--file F] [--git-diff] [--git-log N] [--clipboard] [--stdin] [--title T] [--tags T]` | Capture a raw artifact |
| `relic list` | | List relics with content preview |
| `relic distill` | `<id>` | Extract spells from a relic interactively |
| `relic view` | `<id>` | View full relic content |
| `relic remove` | `<id>` | Permanently delete a relic |
Run lore <command> --help for detailed options on any command.
| Package | Purpose |
|---|---|
| `sentence-transformers` | Local semantic embeddings via all-MiniLM-L6-v2 |
| `gitpython` | Git history extraction |
| `typer` + `rich` | CLI and terminal output |
| `textual` | Interactive TUI |
| `watchdog` | Live reload in TUI + background daemon |
| `pyyaml` + `numpy` | YAML storage and vector math |

Release workflows:
- `.github/workflows/release.yml` — prepares a release, bumps version, updates changelog, tags, and creates a GitHub Release
- `.github/workflows/publish.yml` — publishes to PyPI on `release.published`