Thank you for your interest in contributing to LibreFang! "Libre" means freedom, and we mean it — this project is built by its community.
Our promise: if your contribution benefits the project, we merge it as-is. If it needs improvement, we provide active, constructive review to help you get it merged. Every contributor matters.
Active contributors are invited to join the LibreFang GitHub org — core participants who consistently contribute get commit access and a voice in project direction.
This guide covers everything you need to get started, from setting up your development environment to submitting pull requests.
- Ways to Contribute
- Development Environment
- Building and Testing
- Code Style
- Architecture Overview
- How to Add a New Agent Template
- How to Add a New Skill
- How to Add a New Channel Adapter
- How to Add a New LLM Provider
- How to Add a New Tool
- How to Write Integration Tests
- Pull Request Process
- Code of Conduct
## Ways to Contribute

You don't need to know Rust to contribute to LibreFang. We have contribution paths for every skill level:
**No Rust required:**

| What | Skills Needed | Time | Where |
|---|---|---|---|
| Write an agent template | TOML + prompt engineering | 1-2 hours | agents/ |
| Write a skill (Python) | Python | 2-4 hours | ~/.librefang/skills/ |
| Write a skill (JavaScript) | Node.js | 2-4 hours | ~/.librefang/skills/ |
| Fix typos / improve docs | Markdown | 30 min | docs/ |
| Translate docs | Markdown + language | 1-2 hours | docs/i18n/ |
| Report bugs with reproduction steps | Testing | 30 min | GitHub Issues |
| Test on uncommon platforms | Testing | 1 hour | GitHub Issues |

**Rust contributions:**

| What | Skills Needed | Time | Where |
|---|---|---|---|
| Add a channel adapter | Rust + platform API | half day | crates/librefang-channels/ |
| Add an LLM provider driver | Rust + provider API | half day | crates/librefang-runtime/ |
| Add a built-in tool | Rust | 2-4 hours | crates/librefang-runtime/ |
| Write/improve tests | Rust | 1-2 hours | any crate |

**Advanced Rust:**

| What | Skills Needed | Time | Where |
|---|---|---|---|
| Kernel features | Deep Rust + architecture | 1+ days | crates/librefang-kernel/ |
| Security hardening | Rust + security | 1+ days | multiple crates |
| Performance optimization | Rust + profiling | varies | any crate |
| WASM sandbox improvements | Rust + Wasmtime | 1+ days | crates/librefang-runtime/ |

**Apps, SDKs, and gateways:**

| What | Skills Needed | Time | Where |
|---|---|---|---|
| Desktop app features | Rust + Tauri + TypeScript | varies | crates/librefang-desktop/ |
| JavaScript SDK | TypeScript | varies | sdk/javascript/ |
| Python SDK | Python | varies | sdk/python/ |
| WhatsApp gateway | Node.js | varies | packages/whatsapp-gateway/ |
Tip: Look for issues labeled `good first issue` — they include the files to modify, how to test, and estimated difficulty.
**I want to add an agent template (no Rust):**

```bash
cp -r agents/hello-world agents/my-agent
# Edit agents/my-agent/agent.toml
# Submit a PR
```

**I want to write a Python skill (no Rust):**

```bash
mkdir -p ~/.librefang/skills/my-skill
# See docs/skill-development.md for the skill format
```

**I want to fix a bug or add a Rust feature:**

```bash
git clone https://github.com/librefang/librefang.git && cd librefang
cargo build --workspace                                # Build
cargo test --workspace                                 # Test
cargo clippy --workspace --all-targets -- -D warnings  # Lint
```

**I want a zero-setup environment (GitHub Codespaces):** Click the green "Code" button on GitHub → "Codespaces" → "Create codespace on main". The DevContainer will automatically install Rust, Python, and Node.js, and build the project. You'll have a fully working environment in your browser within a few minutes.
## Development Environment

Prerequisites:

- Rust 1.75+ (install via rustup)
- Git
- Python 3.8+ (optional, for Python runtime and skills)
- A supported LLM API key (Anthropic, OpenAI, Groq, etc.) for end-to-end testing
Clone and build:

```bash
git clone https://github.com/librefang/librefang.git
cd librefang
cargo build
```

The first build takes a few minutes because it compiles SQLite (bundled) and Wasmtime. Subsequent builds are incremental.
For running integration tests that hit a real LLM, set at least one provider key:

```bash
export GROQ_API_KEY=gsk_...          # Recommended for fast, free-tier testing
export ANTHROPIC_API_KEY=sk-ant-...  # For Anthropic-specific tests
```

Tests that require a real LLM key will skip gracefully if the env var is absent.
## Building and Testing

Build everything:

```bash
cargo build --workspace
```

Run the full test suite:

```bash
cargo test --workspace
```

The test suite is currently 2,100+ tests. All must pass before merging.

Run tests for a single crate:

```bash
cargo test -p librefang-kernel
cargo test -p librefang-runtime
cargo test -p librefang-memory
```

Lint:

```bash
cargo clippy --workspace --all-targets -- -D warnings
```

The CI pipeline enforces zero clippy warnings.

Format:

```bash
cargo fmt --all
```

Always run `cargo fmt` before committing. CI will reject unformatted code.
After building, verify your local setup:

```bash
cargo run -- doctor
```

## Code Style

- **Formatting:** Use `rustfmt` with default settings. Run `cargo fmt --all` before every commit.
- **Linting:** `cargo clippy --workspace -- -D warnings` must pass with zero warnings.
- **Documentation:** All public types and functions must have doc comments (`///`).
- **Error handling:** Use `thiserror` for error types. Avoid `unwrap()` in library code; prefer `?` propagation.
- **Naming:**
  - Types: `PascalCase` (e.g., `LibreFangKernel`, `AgentManifest`)
  - Functions/methods: `snake_case`
  - Constants: `SCREAMING_SNAKE_CASE`
  - Crate names: `librefang-{name}` (kebab-case)
- **Dependencies:** Workspace dependencies are declared in the root `Cargo.toml`. Prefer reusing workspace deps over adding new ones. If you need a new dependency, justify it in the PR.
- **Testing:** Every new feature must include tests. Use `tempfile::TempDir` for filesystem isolation and random port binding for network tests.
- **Serde:** All config structs use `#[serde(default)]` for forward compatibility with partial TOML.
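The error-handling guideline (prefer `?` propagation over `unwrap()`) can be illustrated with a small, hypothetical, std-only sketch (not LibreFang code):

```rust
use std::fs;
use std::num::ParseIntError;

// Hypothetical example: read a port number from a file. `?` propagates
// both the I/O error and the parse error to the caller, instead of
// panicking the way `unwrap()` would.
fn read_port(path: &str) -> Result<u16, String> {
    let text = fs::read_to_string(path).map_err(|e| e.to_string())?;
    let port: u16 = text
        .trim()
        .parse()
        .map_err(|e: ParseIntError| e.to_string())?;
    Ok(port)
}
```

In real LibreFang code the error type would be a `thiserror` enum rather than `String`; the shape of the propagation is the same.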
## Architecture Overview

LibreFang is organized as a Cargo workspace with 14 crates:
| Crate | Role |
|---|---|
| `librefang-types` | Shared type definitions, taint tracking, manifest signing (Ed25519), model catalog, MCP/A2A config types |
| `librefang-memory` | SQLite-backed memory substrate with vector embeddings, usage tracking, canonical sessions, JSONL mirroring |
| `librefang-runtime` | Agent loop, 3 LLM drivers (Anthropic/Gemini/OpenAI-compat), 53 built-in tools, WASM sandbox, MCP client/server, A2A protocol |
| `librefang-hands` | Hands system (curated autonomous capability packages), 7 bundled hands |
| `librefang-extensions` | Integration registry (25 bundled MCP templates), AES-256-GCM credential vault, OAuth2 PKCE |
| `librefang-kernel` | Assembles all subsystems: workflow engine, RBAC auth, heartbeat monitor, cron scheduler, config hot-reload |
| `librefang-api` | REST/WS/SSE API (Axum 0.8), 76 endpoints, 14-page SPA dashboard, OpenAI-compatible `/v1/chat/completions` |
| `librefang-channels` | 40 channel adapters (Telegram, Discord, Slack, WhatsApp, and 36 more), formatter, rate limiter |
| `librefang-wire` | OFP (LibreFang Protocol): TCP P2P networking with HMAC-SHA256 mutual authentication |
| `librefang-cli` | Clap CLI with daemon auto-detect (HTTP mode vs. in-process fallback), MCP server |
| `librefang-migrate` | Migration engine for importing from OpenClaw (and future frameworks) |
| `librefang-skills` | Skill system: 60 bundled skills, FangHub marketplace, OpenClaw compatibility, prompt injection scanning |
| `librefang-desktop` | Tauri 2.0 native desktop app (WebView + system tray + single-instance + notifications) |
| `xtask` | Build automation tasks |
Key design points:

- **`KernelHandle` trait:** Defined in `librefang-runtime`, implemented on `LibreFangKernel` in `librefang-kernel`. This avoids circular crate dependencies while enabling inter-agent tools.
- **Shared memory:** A fixed UUID (`AgentId(Uuid::from_bytes([0..0, 0x01]))`) provides a cross-agent KV namespace.
- **Daemon detection:** The CLI checks `~/.librefang/daemon.json` and pings the health endpoint. If a daemon is running, commands use HTTP; otherwise, they boot an in-process kernel.
- **Capability-based security:** Every agent operation is checked against the agent's granted capabilities before execution.
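A minimal sketch of what such a capability gate can look like, with invented names (the real check lives in the kernel and covers far more than tools):

```rust
use std::collections::HashSet;

// Hypothetical capability set granted to an agent via its manifest.
struct Capabilities {
    tools: HashSet<String>,
    agent_spawn: bool,
}

// Deny by default: an operation runs only if it was explicitly granted.
fn check_tool(caps: &Capabilities, tool: &str) -> Result<(), String> {
    if caps.tools.contains(tool) {
        Ok(())
    } else {
        Err(format!("capability denied: tool '{tool}' not granted"))
    }
}
```

The design choice worth noting is the deny-by-default direction: the manifest enumerates what an agent *may* do, and everything else fails closed.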
## How to Add a New Agent Template

Agent templates live in the `agents/` directory. Each template is a folder containing an `agent.toml` manifest.
1. Create a new directory under `agents/`:

   ```
   agents/my-agent/agent.toml
   ```

2. Write the manifest:

   ```toml
   name = "my-agent"
   version = "0.1.0"
   description = "A brief description of what this agent does."
   author = "librefang"
   module = "builtin:chat"
   tags = ["category"]

   [model]
   provider = "groq"
   model = "llama-3.3-70b-versatile"

   [resources]
   max_llm_tokens_per_hour = 100000

   [capabilities]
   tools = ["file_read", "file_list", "web_fetch"]
   memory_read = ["*"]
   memory_write = ["self.*"]
   agent_spawn = false
   ```

3. Include a system prompt if needed by adding it to the `[model]` section:

   ```toml
   [model]
   provider = "anthropic"
   model = "claude-sonnet-4-20250514"
   system_prompt = """
   You are a specialized agent that...
   """
   ```

4. Test by spawning:

   ```bash
   librefang agent spawn agents/my-agent/agent.toml
   ```

5. Submit a PR with the new template.
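The `max_llm_tokens_per_hour` budget in a manifest is enforced by the kernel's metering engine. Conceptually the check looks something like this sketch (invented names; the real implementation in `crates/librefang-kernel/src/metering.rs` is richer):

```rust
// Hypothetical hourly token meter for one agent.
struct TokenMeter {
    used_this_hour: u64,
    max_per_hour: u64,
}

impl TokenMeter {
    // Reject the request *before* spending tokens if it would
    // push usage over the hourly budget.
    fn try_consume(&mut self, tokens: u64) -> Result<(), String> {
        if self.used_this_hour + tokens > self.max_per_hour {
            return Err(format!(
                "quota exceeded: {} + {} > {}",
                self.used_this_hour, tokens, self.max_per_hour
            ));
        }
        self.used_this_hour += tokens;
        Ok(())
    }
}
```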
## How to Add a New Skill

Skills are reusable capabilities that agents can invoke. They can be written in Python, JavaScript, or as pure prompt templates — no Rust required.
| Type | Language | Description |
|---|---|---|
| `prompt` | None (TOML only) | A prompt template with variables |
| `python` | Python 3.8+ | A Python script with a `run()` entry point |
| `javascript` | Node.js 18+ | A JS module with a `run()` export |
1. Create a skill directory:

   ```
   my-skill/
     skill.toml
     main.py
   ```

2. Write the manifest (`skill.toml`):

   ```toml
   name = "my-skill"
   version = "0.1.0"
   description = "What this skill does."
   author = "your-name"
   runtime = "python"
   entry = "main.py"
   tags = ["utility"]

   [input]
   url = { type = "string", description = "URL to process", required = true }
   ```

3. Write the implementation (`main.py`):

   ```python
   def run(input: dict) -> str:
       url = input["url"]
       # Your logic here
       return f"Processed: {url}"
   ```

4. Test locally:

   ```bash
   librefang skill test ./my-skill --input '{"url": "https://example.com"}'
   ```

5. Submit as a PR to `skills/community/` or publish to FangHub.
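A manifest's `[input]` table gives the runtime enough information to reject a call with missing required fields before the skill ever runs. A rough, std-only sketch of that idea (invented names, not LibreFang's actual code):

```rust
use std::collections::HashMap;

// Hypothetical mirror of a skill's [input] table: field name -> required?
fn validate_inputs(
    spec: &HashMap<String, bool>,
    provided: &HashMap<String, String>,
) -> Result<(), String> {
    for (name, required) in spec {
        if *required && !provided.contains_key(name) {
            return Err(format!("missing required input '{name}'"));
        }
    }
    Ok(())
}
```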
For skills that are just prompt engineering, no code is needed:

```toml
name = "summarize-email"
version = "0.1.0"
description = "Summarize an email thread."
runtime = "promptonly"
tags = ["email", "productivity"]

[input]
thread = { type = "string", description = "The email thread text", required = true }

[prompt]
template = """
Summarize the following email thread in 3 bullet points:

{{thread}}
"""
```

## How to Add a New Channel Adapter

Channel adapters live in `crates/librefang-channels/src/`. Each adapter implements the `ChannelAdapter` trait.
1. Create a new file: `crates/librefang-channels/src/myplatform.rs`

2. Implement the `ChannelAdapter` trait (defined in `types.rs`):

   ```rust
   use crate::types::{ChannelAdapter, ChannelMessage, ChannelType};
   use async_trait::async_trait;

   pub struct MyPlatformAdapter {
       // token, client, config fields
   }

   #[async_trait]
   impl ChannelAdapter for MyPlatformAdapter {
       fn channel_type(&self) -> ChannelType {
           ChannelType::Custom("myplatform".to_string())
       }

       async fn start(&mut self) -> Result<(), Box<dyn std::error::Error>> {
           // Start polling/listening for messages
           Ok(())
       }

       async fn send(&self, channel_id: &str, content: &str) -> Result<(), Box<dyn std::error::Error>> {
           // Send a message back to the platform
           Ok(())
       }

       async fn stop(&mut self) {
           // Clean shutdown
       }
   }
   ```

3. Register the module in `crates/librefang-channels/src/lib.rs`:

   ```rust
   pub mod myplatform;
   ```
4. Wire it up in the channel bridge (`crates/librefang-api/src/channel_bridge.rs`) so the daemon starts it alongside other adapters.

5. Add configuration support in the `librefang-types` config structs (add a `[channels.myplatform]` section).

6. Add CLI setup wizard instructions in `crates/librefang-cli/src/main.rs` under `cmd_channel_setup`.

7. Write tests and submit a PR.
## How to Add a New LLM Provider

LLM provider drivers live in `crates/librefang-runtime/src/`. LibreFang uses three driver families that cover most providers:
| Driver | Covers |
|---|---|
| `openai_compat` | Any OpenAI-compatible API (Groq, Together, Mistral, local Ollama, etc.) |
| `anthropic` | Anthropic Claude models |
| `gemini` | Google Gemini models |
Most new providers don't need a new driver — just add an entry to the model catalog in `crates/librefang-types/src/models.rs`:

1. Add the provider constant and its base URL.
2. Add model entries with context window sizes and pricing.
3. Add aliases if desired (e.g., `"fast"` -> `"groq/llama-3.3-70b"`).
4. Write a test verifying the model resolves correctly.
If you genuinely need a new driver:

1. Create `crates/librefang-runtime/src/my_provider.rs`.
2. Implement the `LlmDriver` trait (see `anthropic.rs` for reference).
3. Register it in the driver factory in `crates/librefang-runtime/src/llm_driver.rs`.
4. Add config types in `crates/librefang-types/src/config.rs`.
5. Write integration tests (they should skip gracefully if the API key env var is absent).
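For orientation only, here is a deliberately simplified, synchronous sketch of the driver shape. The real `LlmDriver` trait is async and its exact signature lives in `crates/librefang-runtime`; every name below is illustrative:

```rust
// Simplified stand-in for the real async LlmDriver trait.
trait SimpleLlmDriver {
    fn provider_name(&self) -> &'static str;
    fn complete(&self, prompt: &str) -> Result<String, String>;
}

// A toy driver that echoes the prompt; this kind of stub is also
// handy as a test double when exercising the agent loop offline.
struct EchoDriver;

impl SimpleLlmDriver for EchoDriver {
    fn provider_name(&self) -> &'static str {
        "echo"
    }

    fn complete(&self, prompt: &str) -> Result<String, String> {
        Ok(format!("echo: {prompt}"))
    }
}
```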
## How to Add a New Tool

Built-in tools are defined in `crates/librefang-runtime/src/tool_runner.rs`.
1. Add the tool implementation function:

   ```rust
   async fn tool_my_tool(input: &serde_json::Value) -> Result<String, String> {
       let param = input["param"]
           .as_str()
           .ok_or("Missing 'param' field")?;
       // Tool logic here
       Ok(format!("Result: {param}"))
   }
   ```

2. Register it in the `execute_tool` match block:

   ```rust
   "my_tool" => tool_my_tool(input).await,
   ```

3. Add the tool definition to `builtin_tool_definitions()`:

   ```rust
   ToolDefinition {
       name: "my_tool".to_string(),
       description: "Description shown to the LLM.".to_string(),
       input_schema: serde_json::json!({
           "type": "object",
           "properties": {
               "param": {
                   "type": "string",
                   "description": "The parameter description"
               }
           },
           "required": ["param"]
       }),
   },
   ```

4. Agents that need the tool must list it in their manifest:

   ```toml
   [capabilities]
   tools = ["my_tool"]
   ```

5. Write tests for the tool function.

6. If the tool requires kernel access (e.g., inter-agent communication), accept `Option<&Arc<dyn KernelHandle>>` and handle the `None` case gracefully.
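The graceful-degradation pattern for kernel-dependent tools, shown as a self-contained sketch (the trait and tool here are invented for illustration; in LibreFang the handle is the `KernelHandle` trait behind an `Arc`):

```rust
// Invented minimal stand-in for kernel access.
trait Kernel {
    fn agent_count(&self) -> usize;
}

// A tool that needs the kernel refuses politely when it's absent,
// instead of unwrapping and panicking.
fn tool_list_agents(kernel: Option<&dyn Kernel>) -> Result<String, String> {
    match kernel {
        Some(k) => Ok(format!("{} agent(s) running", k.agent_count())),
        None => Err("this tool requires a running kernel".to_string()),
    }
}
```

Returning an error string (rather than panicking) lets the agent loop surface the failure to the LLM, which can then pick a different tool.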
## How to Write Integration Tests

LibreFang has 2,100+ tests covering all crates. Every new feature must include tests. This section explains where tests live, how to structure them, and how to run them.
Tests in LibreFang are inline — they live alongside the source code in `#[cfg(test)]` modules at the bottom of each `.rs` file:

```
crates/librefang-kernel/src/metering.rs    # contains #[cfg(test)] mod tests { ... }
crates/librefang-memory/src/substrate.rs   # contains #[cfg(test)] mod tests { ... }
crates/librefang-runtime/src/retry.rs      # contains #[cfg(test)] mod tests { ... }
```

This is the standard Rust convention and keeps tests close to the code they verify.
- Test module: `#[cfg(test)] mod tests { ... }` at the bottom of the file.
- Test functions: `test_<what_is_being_tested>` in `snake_case`.
  - Good: `test_record_and_check_quota_under`, `test_substrate_kv`, `test_retry_config_defaults`
  - Avoid: `test1`, `it_works`, `my_test`
Follow the setup / action / assertion pattern:

- **Setup** — create the dependencies your code needs (in-memory databases, config structs, etc.).
- **Action** — call the function or method under test.
- **Assertion** — verify the result with `assert!`, `assert_eq!`, or pattern matching.

Many crates provide helpers for setup. For example, `MemorySubstrate::open_in_memory(0.1)` creates an in-memory SQLite database, and the `MeteringEngine` tests use a shared `setup()` function.
All tests in the workspace:

```bash
cargo test --workspace
```

Tests for a specific crate:

```bash
cargo test -p librefang-kernel
cargo test -p librefang-memory
cargo test -p librefang-runtime
```

A single test by name:

```bash
cargo test -p librefang-kernel test_record_and_check_quota_under
```

Show output from passing tests (useful for debugging):

```bash
cargo test -p librefang-memory -- --nocapture
```

A synchronous unit test:

```rust
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_my_feature() {
        // Setup
        let config = MyConfig::default();

        // Action
        let result = config.validate();

        // Assertion
        assert!(result.is_ok());
    }
}
```

An async test:

```rust
#[cfg(test)]
mod tests {
    use super::*;

    #[tokio::test]
    async fn test_my_async_feature() {
        // Setup
        let substrate = MemorySubstrate::open_in_memory(0.1).unwrap();
        let agent_id = AgentId::new();

        // Action
        substrate
            .set(agent_id, "key", serde_json::json!("value"))
            .await
            .unwrap();
        let val = substrate.get(agent_id, "key").await.unwrap();

        // Assertion
        assert_eq!(val, Some(serde_json::json!("value")));
    }
}
```

Tips:

- Use `#[tokio::test]` for any test that calls `.await`. Most crates in LibreFang already depend on `tokio` with the `test-util` feature.
- Use in-memory databases for isolation. `MemorySubstrate::open_in_memory(0.1)` avoids touching the real filesystem.
- Use `tempfile::TempDir` when you need a real directory (e.g., skill loading, file I/O tests). The directory is automatically cleaned up when the `TempDir` value is dropped.
- Use `Default::default()` to construct config structs with sensible defaults, then override only the fields relevant to your test.
- Skip tests that need external services by checking for environment variables:

  ```rust
  #[tokio::test]
  async fn test_llm_integration() {
      let api_key = match std::env::var("GROQ_API_KEY") {
          Ok(k) => k,
          Err(_) => {
              eprintln!("Skipping: GROQ_API_KEY not set");
              return;
          }
      };
      // ... test with real API
  }
  ```

- Extract a `setup()` helper when multiple tests in the same module need the same boilerplate (see `crates/librefang-kernel/src/metering.rs` for an example).
- Test error cases too — verify that invalid input returns the expected error, not just that the happy path works.
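The `setup()` extraction tip can be sketched like this, with an invented fixture and shown without the `#[cfg(test)]` wrapper for brevity:

```rust
// Hypothetical shared fixture for a group of related tests.
struct Fixture {
    quota: u64,
}

// One place to build the fixture; each test calls setup() instead of
// repeating the construction boilerplate.
fn setup() -> Fixture {
    Fixture { quota: 100 }
}

fn is_under_quota(f: &Fixture, used: u64) -> bool {
    used <= f.quota
}
```

Each test then starts with `let f = setup();` and changes only the fields it cares about, keeping the test bodies focused on the action and assertion.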
## Pull Request Process

1. **Fork and branch:** Create a feature branch from `main`. Use descriptive names like `feat/add-matrix-adapter` or `fix/session-restore-crash`.

2. **Make your changes:** Follow the code style guidelines above.

3. **Test thoroughly:**
   - `cargo test --workspace` must pass (all 2,100+ tests).
   - `cargo clippy --workspace --all-targets -- -D warnings` must produce zero warnings.
   - `cargo fmt --all --check` must produce no diff.

4. **Write a clear PR description:** Explain what changed and why. Include before/after examples if applicable.

5. **One concern per PR:** Keep PRs focused. A single PR should address one feature, one bug fix, or one refactor — not all three.

6. **Review process:** At least one maintainer must approve before merge. Maintainers give an initial response within 7 days. If your PR needs changes, we provide specific, actionable suggestions — we don't leave you guessing. Contributor attribution is always preserved. See `GOVERNANCE.md` for full project policy.

7. **CI must pass:** All automated checks must be green before merge.
Use clear, imperative-mood commit messages:

```
Add Matrix channel adapter with E2EE support
Fix session restore crash on kernel reboot
Refactor capability manager to use DashMap
```
## Code of Conduct

This project follows the `CODE_OF_CONDUCT.md` in this repository. By participating, you agree to uphold a welcoming, inclusive, and harassment-free environment for everyone.
Please report unacceptable behavior to the maintainers.
- Ask in GitHub Discussions for questions or ideas.
- Open a GitHub Issue for bugs or feature requests.
- Check the `docs/` directory for detailed guides on specific topics.
- Read `GOVERNANCE.md` for decision-making, maintainer expectations, and attribution rules.