This is an experiment in keeping the code for all components in a single repository, also known as a monorepo.
Please run `sh setup-hooks.sh` to enforce the correct naming convention for branches.
The code is grouped by language or framework name.
Motivated by a shareable development experience, this repository provides:

- a `nix develop .#rust` compatible shell environment containing a Rust toolchain and other tools, including `nats` and `just`
- `just` compatible recipes via the Justfile; handily, `just` comes via the Nix development shell as well
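For example, to enter the shell and see the available recipes (`just --list` is the standard way to enumerate recipes defined in a Justfile):

```bash
# Enter the Rust development shell provided by the flake
nix develop .#rust

# List the recipes defined in the Justfile
just --list
```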
```
/flake.nix
/flake.lock
/nix/            # [blueprint](https://github.com/numtide/blueprint) set up underneath here
/Cargo.toml
/Cargo.lock
/rust/           # all Rust code lives here
/rust/clients/
/rust/services/
/rust/hpos-hal/
/rust/netdiag/
/rust/util_libs/
```
Reusable Pulumi modules with examples live under `/pulumi/`.
The CI system is driven by buildbot-nix.
This repo is configured with `treefmt-nix`, which can be invoked via `nix fmt`.
The repository includes a development container environment for testing the full stack locally. This setup uses `systemd-nspawn` containers to simulate a production environment.
- Sudo access (required for container management)
- Nix development environment, entered via `nix develop .#rust` or `direnv allow`
The development environment includes:

- `dev-hub`: NATS server (and bootstrap server for hosts joining the Holo system)
- `dev-orch`: Orchestrator service
- `dev-host`: Holo Host Agent
- `dev-gw`: Gateway service
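Once the containers are running (see the walkthrough below), they should be visible to `machinectl`; `sudo` is used here because container management requires elevated privileges, as noted in the prerequisites:

```bash
# After `just dev-cycle-logs`, the four dev containers should be listed
sudo machinectl list
```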
The container system supports two networking modes with different port forwarding approaches:
Host networking:

- Containers share the host's network namespace
- Direct port access without forwarding
- Recommended for development and testing
- No additional configuration needed
Private networking:

- Containers run in isolated network namespaces
- Requires port forwarding for external access
- Uses socat-based tunneling for reliable connectivity
- Production-ready with proper isolation
Two-tier port forwarding architecture: the system chains two socat forwarders to handle the fact that Holochain only binds to localhost inside containers:
- Internal socat forwarder (inside the container):

  ```bash
  # Inside the container: forwards 0.0.0.0:8001 → 127.0.0.1:8000
  socat TCP-LISTEN:8001,bind=0.0.0.0,fork,reuseaddr TCP:127.0.0.1:8000
  ```

  - Bridges the gap between Holochain's localhost-only binding and the container network
  - Automatically created when `privateNetwork = true`
  - Service: `socat-internal-admin-forwarder`
- Host-side socat tunnel (on the host):

  ```bash
  # On the host: forwards localhost:8000 → container:8001
  socat TCP-LISTEN:8000,fork,reuseaddr TCP:10.0.85.2:8001
  ```

  - Provides external access from the host to the container
  - Connects to the internal forwarder port (8001)
  - Service: `socat-${containerName}-admin`
Port flow:

```
Host Client → localhost:8000 → Host socat → 10.0.85.2:8001 → Internal socat → 127.0.0.1:8000 → Holochain
```
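Each hop of that chain can be sanity-checked from the host; the container IP `10.0.85.2` and the ports follow the example configuration below:

```bash
# Host-side tunnel answering on localhost
nc -z localhost 8000 && echo "host tunnel up"

# Internal forwarder reachable on the container's address
nc -z 10.0.85.2 8001 && echo "internal forwarder up"
```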
Usage example:

```nix
# In your container configuration
privateNetwork = true;
adminWebsocketPort = 8000;
httpGwEnable = true;
httpGwPort = 8080;
```
This automatically creates a two-tier socat tunnel system:
- Host `localhost:8000` → Container `10.0.85.2:8001` → Container `127.0.0.1:8000` (admin)
- Host `localhost:8080` → Container `10.0.85.2:4000` → Container `127.0.0.1:4000` (HTTP gateway)
Deployment example:

```nix
# Production host configuration
holo.host-agent = {
  enable = true;
  containerPrivateNetwork = true; # Enables automatic socat support
  # ... other configuration
};
```
When `host_agent` deploys a Holochain workload, it automatically creates:

- Container with internal socat services
- Host with external socat services
- Complete two-tier port forwarding chain
Verification in production:

```bash
# Check that containers are created with socat services
systemctl list-units | grep socat

# Verify port forwarding is working
nc -z localhost 8000

# Check container networking
machinectl list
```
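To follow what a given tunnel is doing, the host-side service logs can be inspected; the unit name below substitutes `dev-host` into the `socat-${containerName}-admin` pattern purely as an illustration:

```bash
# Follow the host-side socat tunnel logs for one container (name is illustrative)
sudo journalctl -fu socat-dev-host-admin
```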
The development environment includes the following key packages:
- Core Tools (required for container operation):
  - `coreutils` - Basic Unix utilities for container management
  - `systemd` - System and service manager for container orchestration
  - `bash` - Shell environment for container interaction
  - `pkg-config` - Helper tool for compiling applications and dependencies

- NATS Stack (required for core messaging infrastructure):
  - `nats-server` - NATS messaging server for inter-service communication
  - `natscli` - NATS command-line interface for monitoring and management
  - `nsc` - NATS configuration tool for managing NATS security

- Database:
  - MongoDB Atlas URL - Connection to the Holo Org's MongoDB instance

- Development Tools:
  - `cargo` - Rust package manager for building Rust components
  - `rustc` - Rust compiler for development
  - `just` - Command runner for development workflows
  - `holochain` binaries - Required for running Holochain tests and development
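A quick way to confirm these tools are on the `PATH` inside the development shell; the loop below is a generic availability check, not a recipe from the Justfile:

```bash
# Inside `nix develop .#rust`: confirm the key tools are available
for tool in nats-server nats nsc cargo rustc just holochain; do
  command -v "$tool" >/dev/null || echo "missing: $tool"
done
```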
- Start the development containers and follow logs:

```bash
just dev-cycle-logs

# ...or use the log-compatible version
# if you're unable to view logs with the command above
just dev-cycle-logs-compat
```
The development environment now supports testing with different Holochain versions:
```bash
# Test with Holochain 0.5 (default - uses kitsune2 networking)
just -- dev-cycle-v05

# Test with Holochain 0.4 (legacy - uses separate bootstrap/signal services)
just -- dev-cycle-v04

# Or specify the version manually
just -- dev-cycle "dev-hub dev-host dev-orch dev-gw" "0.4"
```
This will automatically:
- select the appropriate holonix package (0.3, 0.4, or 0.5)
- configure the correct bootstrap service pattern
- use compatible networking protocols
- In a second terminal, start the Holochain terminal: `just dev-hcterm`
- In a third terminal, install the test application: `just dev-install-app`
- Switch back to the Holochain terminal and press `r` twice to refresh.
- Start the development containers and follow logs:

```bash
just dev-cycle-logs

# ...or use the log-compatible version
# if you're unable to view logs with the command above
just dev-cycle-logs-compat
```
This command:
- Creates and starts the dev containers (dev-hub, dev-host, dev-orch, dev-gw)
- Sets up NATS messaging infrastructure
- Initializes the Holochain conductor
- Starts following the logs from all services
Example output:
You should see logs from all services starting up, including NATS server initialization and Holochain conductor startup messages.
```
[dev-hub] [INFO] Starting NATS server...
[dev-hub] [INFO] NATS server started on port 4222
[dev-host] [INFO] Starting Holochain conductor...
[dev-host] [INFO] Holochain conductor started
[dev-orch] [INFO] Orchestrator service started
[dev-gw] [INFO] Gateway service started on port 8080
```
Common errors:
```
[ERROR] Failed to start NATS server: port 4222 already in use
Solution: Run `just dev-destroy` to clean up existing containers

[ERROR] Failed to start Holochain conductor: permission denied
Solution: Ensure you have sudo access and run `just dev-destroy` first
```
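If port 4222 stays occupied even after `just dev-destroy`, a generic check (not a repository recipe) shows which process still holds it:

```bash
# Show the listener occupying the default NATS port
ss -tlnp | grep ':4222'
```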
- Install the Humm Hive HApp:

```bash
just dev-install-humm-hive
```
This command:
- Downloads the Humm Hive HApp bundle from the configured URL
- Installs it into the Holochain conductor
- Registers the HApp with the host agent
- Starts the HApp
Example output: You should see messages about the HApp being installed and started successfully.
```
[INFO] Downloading HApp bundle from https://gist.github.com/steveej/...
[INFO] Installing HApp into conductor...
[INFO] Registering HApp with host agent...
[INFO] Starting HApp...
[INFO] HApp started successfully
```
Common errors:
```
[ERROR] Failed to download HApp bundle: network error
Solution: Check your internet connection and try again

[ERROR] HApp already installed
Solution: Run `just dev-uninstall-humm-hive` first, then try installing again

[ERROR] Failed to register with host agent: NATS connection error
Solution: Ensure NATS server is running in dev-hub container
```
- Verify the installation:

```bash
just dev-ham-find-installed-app
```
This command:
- Queries the host agent for installed applications
- Filters for the Humm Hive HApp using the workload ID
Example output:
You should see the HApp details including:
{ "installed_app_id": "67d2ef2a67d4b619a54286c4", "status": { "desired": "running", "actual": "running", "payload": {} }, "dna_hash": "uhC0kwENLeSuselWQJtywbYB1QyFK1d-ujmFFtxsq6CYY7_Ohri2u" }
Common errors:
```
[ERROR] No installed app found with ID: `67d2ef2a67d4b619a54286c4`
Solution: Ensure the hApp was installed successfully with `just dev-install-humm-hive`

[ERROR] Failed to connect to host agent
Solution: Check if dev-host container is running with `just dev-logs`
```
- Option a - init without gw: In a new terminal, initialize the Humm Hive HApp:

```bash
just dev-ham-init-humm
```
This command:
- Connects to the Holochain conductor
- Initializes the Humm Hive core zome
- Sets up the initial Hive structure
Example output: You should see a success message indicating the Hive has been initialized.
```
[INFO] Connecting to Holochain conductor...
[INFO] Initializing Humm Hive core zome...
[INFO] Hive initialized successfully
```
Common errors:
```
[ERROR] Failed to connect to Holochain conductor: connection refused
Solution: Ensure the dev containers are running with `just dev-cycle-logs`

[ERROR] Hive already initialized
Solution: This is not an error - the Hive can only be initialized once
```
- Option b - init with gw: Test the HApp using the HTTP gateway:

```bash
just dev-gw-curl-humm-hive
```

This command:

- Makes an HTTP request to the gateway service
- Calls the `init` function on the `humm_earth_core` zome
- Verifies the HApp is responding
Example output:
You should see a successful response from the HApp's init function.
```
> GET /uhC0kwENLeSuselWQJtywbYB1QyFK1d-ujmFFtxsq6CYY7_Ohri2u/67d2ef2a67d4b619a54286c4/humm_earth_core/init
< HTTP/1.1 200 OK
< Content-Type: application/json
{
"status": "success",
"message": "Hive initialized"
}
```
Common errors:
```
< HTTP/1.1 404 Not Found
Solution: Verify the HApp is installed and running with `just dev-ham-find-installed-app`
< HTTP/1.1 500 Internal Server Error
Solution: Check the gateway logs with `just dev-logs` for more details
```
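Under the hood the gateway call is a plain HTTP GET; a rough `curl` equivalent is sketched below, assuming the gateway listens on `localhost:8080` as configured above (the path segments are the DNA hash, installed app id, zome, and function):

```bash
# Hand-rolled version of the gateway request issued by `just dev-gw-curl-humm-hive`
curl -v "http://localhost:8080/uhC0kwENLeSuselWQJtywbYB1QyFK1d-ujmFFtxsq6CYY7_Ohri2u/67d2ef2a67d4b619a54286c4/humm_earth_core/init"
```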
- Uninstall the HApp:

```bash
just dev-uninstall-humm-hive
```
This command:
- Stops the HApp
- Unregisters it from the host agent
- Removes it from the Holochain conductor
Example output: You should see confirmation messages about the HApp being stopped and uninstalled.
```
[INFO] Stopping HApp...
[INFO] Unregistering from host agent...
[INFO] Removing from Holochain conductor...
[INFO] HApp uninstalled successfully
```
Common errors:
```
[ERROR] HApp not found
Solution: The HApp may already be uninstalled

[ERROR] Failed to stop HApp: timeout
Solution: Try running `just dev-destroy` to force clean up all containers
```
The development environment manages workloads through a series of states that represent the lifecycle of a workload. Here's a description of what each state means and its expected flow:
- Initial States:
  - `reported`: The workload has been registered and stored in MongoDB, but is not yet assigned a host
  - `assigned`: The workload has been assigned to a host and has successfully stored this host in MongoDB
  - `pending`: The workload has been updated in MongoDB and is queued for installation on its host(s)
  - `updating`: The workload has been updated in MongoDB and is queued for updating on its host(s)

- Installation and Update States:
  - `installed`: The workload hApp has been installed but is not yet running
  - `updated`: The workload hApp has been successfully updated
  - `running`: The workload hApp is installed and actively running

- Removal States:
  - `deleted`: The workload has been marked as deleted in MongoDB and is queued for deletion on its host(s)
  - `removed`: The workload<>host links have been removed in MongoDB
  - `uninstalled`: The workload hApp has been completely uninstalled from all hosts

- Error States:
  - `error`: An error occurred during a state transition
  - `unknown`: The current state cannot be determined
```
# Initial registration and assignment (eg: just dev-install-humm-hive)
reported (stored in MongoDB) -> assigned (host stored in MongoDB) -> pending (queued/sending update install request via nats)

# Installation process
pending -> installed (hApp installed) -> running (hApp started)

# When updating (eg: just dev-hub-host-agent-remote-hc-humm)
running -> updating (queued/sending update request via nats) -> updated (hApp updated) -> running

# When uninstalling (eg: just dev-uninstall-humm-hive)
running -> deleted (marked in MongoDB) -> removed (links removed from MongoDB) -> uninstalled (hApp removed from hosts)
```
The status object in the response shows both the desired and actual states:
```
{
  "status": {
    "desired": "running",  // The target state in MongoDB
    "actual": "running",   // The current state on the host
    "payload": {}          // Additional state-specific data (e.g., error messages, update progress)
  }
}
```
If the `actual` state differs from the `desired` state, it indicates one of the following:
- The workload is in transition between states
- The host is still processing the state change
- An error has occurred during the state transition
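To watch the two fields converge during a transition, the verification recipe from above can be combined with `jq`; this assumes `jq` is available in your shell and that the recipe's output matches the JSON shown earlier:

```bash
# Poll desired vs. actual state every two seconds (jq availability is an assumption)
watch -n 2 'just dev-ham-find-installed-app | jq "{desired: .status.desired, actual: .status.actual}"'
```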
The following commands demonstrate the complete flow from initial registration to assignment:
- Register the workload (reported state):

```bash
# Register a new workload in MongoDB
just dev-hub-host-agent-remote-hc reported WORKLOAD.add
```

- Assign the workload (assigned state):

```bash
# Assign the workload to a host
just dev-hub-host-agent-remote-hc assigned WORKLOAD.update
```

- Queue for installation (pending state):

```bash
# Queue the workload for installation
just dev-hub-host-agent-remote-hc pending WORKLOAD.insert
```
Or combine these steps into a single command:

```bash
# Register, assign, and queue in one command
just dev-install-humm-hive
```
- View the current state in MongoDB: `just dev-ham-find-installed-app`
- View the logs for all services: `just dev-logs`
- Recreate containers and follow logs: `just dev-cycle-logs`
- Destroy all development containers: `just dev-destroy`
CI builds all Nix derivations exposed under the `checks` flake output.

While the command is called `nix build`, it's also used to execute (i.e. run) various forms of tests. For example, this runs the holo-agent integration test, defined as a NixOS VM test, with verbose output:

```bash
nix build -vL .#checks.x86_64-linux.holo-agent-integration-nixos
```
Or this runs the `extra-container-holochain` integration tests, which are NixOS VM tests defined directly in the package file:

```bash
# Host networking test (recommended)
nix build -vL .#checks.x86_64-linux.pkgs-extra-container-holochain-integration-host-network

# Private networking test (documents port forwarding issues)
nix build -vL .#checks.x86_64-linux.pkgs-extra-container-holochain-integration-private-network
```
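To discover every check that is exposed for your platform before picking one, the standard flake inspection command works here as well:

```bash
# List the flake outputs, including the `checks` attribute set, for this repository
nix flake show
```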
The test environment automatically provides:
- MongoDB for database tests
- NATS server for messaging tests
- Systemd for service management
- Filesystem tools for storage tests
- Network isolation for integration tests
The testing environment includes additional packages and tools:
- Database:
  - `mongodb-ce` - MongoDB Community Edition (used for running integration tests)

- Filesystem Tools (for hpos-hal testing):
  - `dosfstools` - Tools for FAT filesystems
  - `e2fsprogs` - Tools for ext2/ext3/ext4 filesystems
- Clippy Linting:

```bash
cargo fmt && cargo clippy
```

Runs Rust's formatter and Clippy to catch common mistakes and enforce style guidelines.
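A stricter variant is common in CI pipelines; whether this repository's CI uses these exact flags is an assumption, but the flags themselves are standard Cargo/Clippy options:

```bash
# Check formatting without rewriting files, and fail on any Clippy warning
cargo fmt --check && cargo clippy --workspace --all-targets -- -D warnings
```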
- Holo Agent Integration:

```bash
nix build -vL .#checks.x86_64-linux.holo-agent-integration-nixos
```
Runs a NixOS VM test that:
- Sets up a complete Holo agent environment
- Tests agent initialization
- Verifies agent communication
- Tests workload management
- Holochain Container Integration:

Host Networking Test (recommended - works reliably):

```bash
nix build -vL .#checks.x86_64-linux.pkgs-extra-container-holochain-integration-host-network
```

Private Networking Test (currently failing due to systemd-nspawn port forwarding compatibility):

```bash
nix build -vL .#checks.x86_64-linux.pkgs-extra-container-holochain-integration-private-network
```
Both tests verify:
- Container creation and initialization
- Holochain conductor configuration
- Service readiness with systemd notifications
- Network connectivity (host vs private networking)
- Environment variable handling for `IS_CONTAINER_ON_PRIVATE_NETWORK`
- State persistence (holochain data directory and configuration)
```bash
# Run Rust tests
cargo test

# Run integration tests
nix build -vL .#checks.x86_64-linux.holo-agent-integration-nixos
```
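To scope the Rust tests to a single crate, Cargo's `-p` selector works; the package name below assumes crate names match the directory names under `/rust/`:

```bash
# Run only the hpos-hal crate's tests (package name is an assumption)
cargo test -p hpos-hal
```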
Please see the LICENSE file.
To inspect the container-related environment variables of the host agent service on a deployed host:

```bash
sudo systemctl show holo-host-agent.service | grep Environment | grep CONTAINER
sudo systemctl cat holo-host-agent.service | grep -A10 -B10 CONTAINER
```