This directory contains scripts and tools for setting up a local development environment for the Nebari Operator.
```
dev/
├── Makefile                        # Main automation interface
├── README.md                       # This file
├── scripts/                        # Development automation scripts
│   ├── cluster/                    # Cluster lifecycle management
│   │   ├── create.sh               # Create Kind cluster with MetalLB
│   │   └── delete.sh               # Delete Kind cluster
│   ├── services/                   # Service installation
│   │   ├── install.sh              # Install Envoy Gateway, cert-manager, Keycloak, Gateway
│   │   ├── uninstall.sh            # Uninstall all services
│   │   └── keycloak/               # Keycloak OIDC authentication
│   │       ├── setup.sh            # Configure Keycloak realm and users
│   │       └── README.md           # Keycloak documentation
│   ├── networking/                 # Network configuration
│   │   ├── update-hosts.sh         # Manage /etc/hosts entries for NebariApps
│   │   └── port-forward.sh         # Set up port forwarding for local access
│   └── testing/                    # Testing utilities
│       └── test-connectivity.sh    # Test HTTP/HTTPS connectivity to apps
└── examples/                       # Example manifests for local development
    ├── app-deployment.yaml         # Test application deployment (nginx)
    └── nebariapp.yaml              # Simple NebariApp example with TLS and routing
```
## Quick Start

```shell
# Create cluster and install all services
make setup

# Check status
make status

# Teardown everything
make teardown
```

## Available Commands

```shell
make help                 # Show all available commands
make cluster-create       # Create Kind cluster with MetalLB
make services-install     # Install Envoy Gateway, cert-manager, etc.
make setup                # Full setup (cluster + services)
make teardown             # Full cleanup
make status               # Check environment status
make update-hosts         # Update /etc/hosts with all NebariApp hostnames
make test-connectivity    # Test HTTP/HTTPS connectivity to an app
                          # Usage: make test-connectivity APP=<name> NS=<namespace>
make port-forward         # Set up port forwarding for local access
```

## Environment Details

### Kind Cluster

- Name: `nebari-operator-dev` (configurable via `CLUSTER_NAME`)
- 1 control-plane node + 2 worker nodes
- MetalLB for LoadBalancer services
- Port forwarding: 80, 443
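The cluster layout above can be expressed as a Kind configuration. A hypothetical sketch of what `scripts/cluster/create.sh` might pass to `kind create cluster --config` (the port mappings mirror the forwarded ports listed above; the exact file contents are an assumption, not reproduced from the script):

```yaml
# Sketch only: assumed Kind config, not the actual file in scripts/cluster/
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 80    # forward HTTP to the host
        hostPort: 80
        protocol: TCP
      - containerPort: 443   # forward HTTPS to the host
        hostPort: 443
        protocol: TCP
  - role: worker
  - role: worker
```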
### Envoy Gateway

- Installed via Helm
- Namespace: `envoy-gateway-system`
- Provides the Gateway API implementation
- GatewayClass: `envoy-gateway`

### cert-manager

- Installed via Helm with Gateway API support
- Namespace: `cert-manager`
- Self-signed CA for development
- Automatic certificate management
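A self-signed setup in cert-manager typically starts from a `ClusterIssuer`. A minimal sketch of the kind of issuer `install.sh` might create (the issuer name is an assumption; only the self-signed mechanism is described in this README):

```yaml
# Sketch only: assumed issuer name, created by scripts/services/install.sh
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-issuer   # assumed name
spec:
  selfSigned: {}            # issue certificates without an external CA
```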
### Gateway Resources

- Gateway: `nebari-gateway` in `envoy-gateway-system`
  - HTTP listener on port 80
  - HTTPS listener on port 443
  - Allows routes from all namespaces
- Wildcard certificate: `*.nebari.local`
  - Secret: `nebari-gateway-tls`
  - Self-signed for development
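The Gateway and certificate themselves are created by `install.sh` and not reproduced in this README. A hedged sketch of resources matching the description above (names and namespace come from this README; the `issuerRef` name is an assumption):

```yaml
# Sketch only: illustrative manifests, not the actual ones applied by install.sh
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: nebari-gateway
  namespace: envoy-gateway-system
spec:
  gatewayClassName: envoy-gateway
  listeners:
    - name: http
      port: 80
      protocol: HTTP
      allowedRoutes:
        namespaces:
          from: All          # allow routes from all namespaces
    - name: https
      port: 443
      protocol: HTTPS
      tls:
        mode: Terminate
        certificateRefs:
          - name: nebari-gateway-tls
      allowedRoutes:
        namespaces:
          from: All
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: nebari-gateway-cert
  namespace: envoy-gateway-system
spec:
  secretName: nebari-gateway-tls
  dnsNames:
    - "*.nebari.local"
  issuerRef:
    kind: ClusterIssuer
    name: selfsigned-issuer   # assumed issuer name
```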
### Keycloak

Installed automatically by `scripts/services/install.sh`. To configure the Nebari realm and users:

```shell
./scripts/services/keycloak/setup.sh
```

- Namespace: `keycloak`
- Admin credentials: `admin` / `admin`
- Realm: `nebari`
- Realm admin: `admin` / `nebari-admin`
- See the Keycloak README (`scripts/services/keycloak/README.md`) for details
Note: The `examples/` directory contains simplified manifests for quick local development. For comprehensive test variations (HTTP-only, multiple paths, TLS disabled, etc.), see the E2E test files in `test/e2e/`, which create these programmatically.
## Typical Workflow

1. Set up the environment:

   ```shell
   cd dev
   make setup
   ```

2. Deploy the operator:

   ```shell
   cd ..
   make deploy
   ```

3. Deploy the test application and NebariApp:

   ```shell
   # Deploy the test app (nginx)
   kubectl apply -f examples/app-deployment.yaml

   # Deploy the NebariApp to expose it via the Gateway
   kubectl apply -f examples/nebariapp.yaml

   # Wait for the app to be ready
   kubectl wait --for=condition=Ready nebariapp/sample-app -n dev-test --timeout=60s

   # Update /etc/hosts for local access
   make update-hosts
   ```

4. Test connectivity:

   ```shell
   # Test the app
   curl -k https://sample-app.nebari.local

   # Or use the test script
   make test-connectivity APP=sample-app NS=dev-test
   ```
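The contents of `examples/app-deployment.yaml` are not reproduced in this README. A minimal sketch of the kind of nginx Deployment and Service it might contain (the namespace `dev-test` and the name `sample-app` come from the commands above; image, labels, and all other fields are assumptions):

```yaml
# Sketch only: illustrative stand-in for examples/app-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app
  namespace: dev-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      containers:
        - name: nginx
          image: nginx:1.27
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: sample-app
  namespace: dev-test
spec:
  selector:
    app: sample-app
  ports:
    - port: 80
      targetPort: 80
```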
## E2E Tests

The E2E tests can use this pre-configured environment:

```shell
cd dev
make setup
cd ..
make test-e2e
```

Or let the tests manage everything:

```shell
# Tests will create the cluster if needed
make test-e2e
```

## Environment Variables

- `CLUSTER_NAME`: Name of the Kind cluster (default: `nebari-operator-dev`)
- `KUBECONFIG`: Path to kubeconfig file (default: `~/.kube/config`)
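Both variables can be overridden per invocation, e.g. `CLUSTER_NAME=my-dev make setup`. A sketch of how the dev scripts might resolve them (the variable names come from this README; the defaulting pattern itself is an assumption about the scripts' internals):

```shell
# Assumed defaulting pattern: use the environment value if set,
# otherwise fall back to the documented default.
CLUSTER_NAME="${CLUSTER_NAME:-nebari-operator-dev}"
KUBECONFIG="${KUBECONFIG:-$HOME/.kube/config}"

echo "cluster: ${CLUSTER_NAME}"
echo "kubeconfig: ${KUBECONFIG}"
```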
## DNS Resolution

The setup automatically configures `/etc/hosts` to resolve `*.nebari.local` domains to the Gateway's LoadBalancer IP. Because `/etc/hosts` does not support wildcards, each hostname needs its own entry.

The services-install script automatically:

- Gets the Gateway's LoadBalancer IP from MetalLB
- Adds a base entry:

  ```
  <GATEWAY_IP> nebari.local # nebari-gateway
  ```
After creating NebariApp resources, add their hostnames to `/etc/hosts`:

```shell
# Scan and add all NebariApp hostnames automatically
./scripts/networking/update-hosts.sh

# Or add a specific app's hostname
./scripts/networking/update-hosts.sh sample-app
```

This adds entries like:

```
172.18.255.200 sample-app.nebari.local # nebari-gateway
```
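The internals of `update-hosts.sh` are not shown here, but scripts like it usually add entries idempotently: remove any stale line carrying their marker comment, then append a fresh one. A minimal sketch of that pattern under those assumptions, demonstrated on a temp file rather than the real `/etc/hosts` (the IP and hostname are illustrative):

```shell
# Sketch of an idempotent hosts-entry update (an assumption about how
# update-hosts.sh works); operates on a temp file, not /etc/hosts.
HOSTS_FILE="$(mktemp)"
MARKER="# nebari-gateway"
GATEWAY_IP="172.18.255.200"
APP_HOST="sample-app.nebari.local"

add_hosts_entry() {
    # Drop any previous entry for this hostname that carries our marker,
    # then append the fresh one, so reruns never duplicate lines.
    grep -v " ${APP_HOST} ${MARKER}" "${HOSTS_FILE}" > "${HOSTS_FILE}.new" || true
    mv "${HOSTS_FILE}.new" "${HOSTS_FILE}"
    echo "${GATEWAY_IP} ${APP_HOST} ${MARKER}" >> "${HOSTS_FILE}"
}

add_hosts_entry
add_hosts_entry   # second run: still exactly one entry
grep -c "${APP_HOST}" "${HOSTS_FILE}"   # prints 1
```

Running against the real `/etc/hosts` would additionally need `sudo`, as in the manual commands below.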
If needed, you can add entries manually:

```shell
# Get Gateway IP
GATEWAY_IP=$(kubectl get svc -n envoy-gateway-system \
  -l gateway.envoyproxy.io/owning-gateway-name=nebari-gateway \
  -o jsonpath='{.items[0].status.loadBalancer.ingress[0].ip}')

# Add entry
echo "${GATEWAY_IP} my-app.nebari.local # nebari-gateway" | sudo tee -a /etc/hosts
```

## Testing Apps

Once DNS is configured, test your apps:
```shell
# HTTP (redirects to HTTPS if TLS is enabled)
curl http://sample-app.nebari.local

# HTTPS (use -k for the self-signed cert)
curl -k https://sample-app.nebari.local

# View headers and follow redirects
curl -v -L -k https://sample-app.nebari.local

# Test from a browser
# Open: https://sample-app.nebari.local
# (Accept the self-signed certificate warning)
```

### Automated Testing
Use the test-connectivity script to check whether an app is reachable:

```shell
# Test the sample app from examples/
make test-connectivity APP=sample-app NS=dev-test

# Or use the script directly
./scripts/testing/test-connectivity.sh sample-app dev-test
```

This will:

- Check whether the NebariApp exists and is ready
- Test HTTP and HTTPS connectivity
- Show curl commands for manual testing
- Check whether the hostname is in `/etc/hosts`
## Troubleshooting

To view the Gateway's external IP:

```shell
kubectl get svc -n envoy-gateway-system \
  -l gateway.envoyproxy.io/owning-gateway-name=nebari-gateway
```

Check the overall environment status:

```shell
make status
```

Inspect the Gateway and GatewayClass:

```shell
kubectl get gateway nebari-gateway -n envoy-gateway-system -o yaml
kubectl get gatewayclass envoy-gateway -o yaml
```

Inspect certificates:

```shell
kubectl get certificate -n envoy-gateway-system
kubectl describe certificate nebari-gateway-cert -n envoy-gateway-system
```

View Envoy Gateway logs:

```shell
kubectl logs -n envoy-gateway-system deployment/envoy-gateway
```

Reset the environment:

```shell
make teardown
make setup
```

## Differences from Production

This development setup differs from production in these ways:
- Certificates: self-signed CA instead of Let's Encrypt
- LoadBalancer: MetalLB instead of a cloud provider load balancer
- DNS: `/etc/hosts` instead of real DNS
- Scale: single-replica deployments instead of an HA setup
For production setup with ArgoCD, see the main documentation.