Kaia is an intelligent Kubernetes Operations Assistant built with pydantic-ai that provides safe and expert-level Kubernetes cluster management through natural language interactions.
(If you're looking for the previous version of Kaia, based on AutoGen 0.4 and function calls, see the autogen branch.)
Kaia serves as your AI-powered DevOps companion, offering:
- Multi-Provider AI Support: Choose from Ollama (local), Google Gemini, or GitHub Models
- Safe Kubernetes Operations: All kubectl commands executed through secure MCP (Model Context Protocol) server
- Expert Guidance: Comprehensive knowledge of Kubernetes best practices and troubleshooting
- Interactive CLI: Natural language interface for cluster management tasks
- Containerized Security: kubectl operations run in isolated Docker containers with read-only kubeconfig mounting
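The read-only kubeconfig mount can be sketched in Python. This is a minimal illustration, not Kaia's actual container setup; the in-container config path is an assumption:

```python
from pathlib import Path

def build_mcp_docker_args(kubeconfig: str = "~/.kube/config",
                          image: str = "ghcr.io/alexei-led/k8s-mcp-server:latest"):
    """Build a `docker run` argument list that mounts the kubeconfig
    read-only (:ro), so the container can query the cluster but can
    never modify the local credentials."""
    host_config = str(Path(kubeconfig).expanduser())
    return [
        "docker", "run", "-i", "--rm",
        # NOTE: the in-container path below is an assumption for illustration
        "-v", f"{host_config}:/root/.kube/config:ro",
        image,
    ]
```

The `:ro` suffix on the volume mount is what enforces the isolation: even a compromised container process cannot rewrite the host's credentials.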
- Ollama: Run local models (llama2, mistral, etc.) for privacy and offline use
- Google Gemini: Advanced reasoning with latest Gemini models (gemini-2.0-flash, etc.)
- GitHub Models: Access to various models through GitHub's AI platform
- MCP-based kubectl execution prevents direct command injection
- Progressive change validation and impact assessment
- Namespace-aware operations with explicit targeting
- Resource validation before destructive operations
- Comprehensive error handling and retry logic
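As a sketch of how such guardrails can work (illustrative, not Kaia's actual implementation; the verb lists are assumptions), each command can be classified before execution:

```python
READ_ONLY_VERBS = {"get", "describe", "logs", "top", "explain", "api-resources"}
DESTRUCTIVE_VERBS = {"delete", "drain", "replace", "apply", "patch", "scale"}

def assess_command(command: str) -> dict:
    """Classify a kubectl command before it is executed."""
    parts = command.split()
    if not parts or parts[0] != "kubectl":
        raise ValueError("only kubectl commands are accepted")
    verb = parts[1] if len(parts) > 1 else ""
    return {
        "verb": verb,
        "read_only": verb in READ_ONLY_VERBS,
        "destructive": verb in DESTRUCTIVE_VERBS,
        # explicit namespace targeting, per the safety principle above
        "namespaced": "-n" in parts or "--namespace" in parts,
    }
```

A destructive, non-namespaced command would then trigger an impact assessment before anything is run.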
- Cluster health monitoring and diagnostics
- Workload deployment, scaling, and management
- Troubleshooting pods, services, deployments, and resources
- Security configuration validation
- Performance optimization recommendations
- Conversational command interface
- Context-aware responses with technical explanations
- Alternative solution suggestions
- Educational explanations of Kubernetes concepts
- Python 3.8+
- Docker (installed and running)
- Kubernetes cluster access (kubeconfig in ~/.kube/config)
- AI provider setup (see the Provider Setup section)
# Clone the repository
git clone https://github.com/otomato-gh/kaia.git
# Change into the project directory
cd kaia
# Install dependencies
pip install -r requirements.txt
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh
# Pull a model
ollama pull llama2
# Start Ollama server
ollama serve
# Optional: Set custom model
export OLLAMA_MODEL_NAME=mistral
# Get API key from https://aistudio.google.com
export GEMINI_API_KEY=your_api_key_here
# Optional: Set custom model
export GEMINI_MODEL_NAME=gemini-2.0-flash
# Get GitHub personal access token
export GITHUB_TOKEN=your_github_token_here
# Optional: Set custom model
export GITHUB_MODEL_NAME=gpt-4o
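The three provider setups above follow one pattern: an optional credential plus a model-name override. A minimal sketch of how they might be resolved (the structure is illustrative; the defaults mirror the examples above):

```python
import os

# Defaults mirror the setup examples; the table itself is illustrative.
PROVIDERS = {
    "ollama": {"key": None, "model_var": "OLLAMA_MODEL_NAME", "default": "llama2"},
    "gemini": {"key": "GEMINI_API_KEY", "model_var": "GEMINI_MODEL_NAME", "default": "gemini-2.0-flash"},
    "github": {"key": "GITHUB_TOKEN", "model_var": "GITHUB_MODEL_NAME", "default": "gpt-4o"},
}

def resolve_provider(name: str) -> dict:
    """Check required credentials and resolve the model name for a provider."""
    cfg = PROVIDERS[name]
    if cfg["key"] and not os.environ.get(cfg["key"]):
        raise EnvironmentError(f"{cfg['key']} is required for provider '{name}'")
    return {"provider": name, "model": os.environ.get(cfg["model_var"], cfg["default"])}
```

Ollama needs no credential since it runs locally; the other two fail fast with a clear message when their token is missing.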
# Use default provider (Ollama)
python kaia.py
# Specify provider
python kaia.py --provider gemini
python kaia.py --provider github
python kaia.py --provider ollama
# Get help
python kaia.py --help
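The --provider flag shown above can be sketched with argparse (a minimal sketch of the CLI surface, not kaia.py's actual parser):

```python
import argparse

def parse_args(argv=None):
    """CLI front-end: --provider selects the AI backend (default: ollama)."""
    parser = argparse.ArgumentParser(
        prog="kaia.py",
        description="Kaia - Kubernetes Operations Assistant")
    parser.add_argument("--provider",
                        choices=["ollama", "gemini", "github"],
                        default="ollama",
                        help="AI provider to use")
    return parser.parse_args(argv)
```

Restricting the flag to `choices` means an unknown provider fails at parse time with a usage message instead of deep inside model setup.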
Enter your Kubernetes request: show all pods in kube-system namespace
Enter your Kubernetes request: scale deployment nginx to 3 replicas
Enter your Kubernetes request: check why pod myapp-123 is failing
Enter your Kubernetes request: create a service for deployment webapp on port 80
Enter your Kubernetes request: Thanks!
┌──────────────────┐     ┌────────────────────┐     ┌─────────────────────┐
│    Kaia (CLI)    │────▶│    AI Provider     │     │  Docker Container   │
│                  │     │  (Ollama/Gemini/   │     │                     │
│ - Natural Lang   │     │   GitHub Models)   │     │   k8s-mcp-server    │
│ - Validation     │     │                    │     │   - kubectl         │
│ - Safety         │     │ - Query Processing │     │   - istioctl        │
│ - Retries        │     │ - Response Gen     │     │   - helm            │
└──────────────────┘     └────────────────────┘     │   - argocd          │
         │                         │                └─────────────────────┘
         │                         │                           │
         └─────────────────────────┼───────────────────────────┘
                                   │
                            ┌──────▼──────┐
                            │ Kubernetes  │
                            │   Cluster   │
                            └─────────────┘
- Read-First Policy: Always gather information before making changes
- Single Command Execution: One kubectl operation per interaction
- Impact Assessment: Explains potential effects before execution
- Progressive Changes: Encourages incremental modifications
- Error Recovery: Intelligent retry with corrected commands
- Resource Validation: Verifies names, namespaces, and parameters
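The error-recovery principle can be sketched as a retry loop in which the model proposes a corrected command after each failure. This is illustrative; `execute` and `correct` are hypothetical callables, not Kaia's real API:

```python
def run_with_retry(execute, correct, command: str, max_attempts: int = 3) -> str:
    """Run a kubectl command; on failure, ask for a corrected command.

    `execute(command)` runs the command and raises RuntimeError on failure;
    `correct(command, error)` maps the failed command and its error message
    to a revised command proposed by the model.
    """
    for attempt in range(max_attempts):
        try:
            return execute(command)
        except RuntimeError as err:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the last error
            command = correct(command, str(err))
```

Bounding the attempts keeps a confused model from looping forever on an unfixable command.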
kaia/
├── kaia.py           # Main application
├── tests.py          # Comprehensive test suite
├── run_tests.py      # Test runner script
├── pytest.ini        # Pytest configuration
├── requirements.txt  # Python dependencies
├── README.md         # This file
└── kaia.png          # Logo
# Install test dependencies (included in requirements.txt)
pip install -r requirements.txt
# Run all tests
python run_tests.py
# Run tests with verbose output
python run_tests.py --verbose
# Run tests with coverage report
python run_tests.py --coverage
# Run specific test categories
python run_tests.py --unit # Unit tests only
python run_tests.py --integration # Integration tests only
# Or use pytest directly
pytest tests.py -v
pytest tests.py --cov=kaia --cov-report=html
The test suite covers:
- ✅ Argument Parsing: Command line argument validation and defaults
- ✅ Environment Setup: Provider-specific environment variable configuration
- ✅ Model Creation: All three providers (Ollama, Gemini, GitHub)
- ✅ Error Handling: Invalid providers, missing tokens, malformed inputs
- ✅ Integration: End-to-end provider setup and model initialization
- ✅ Validator Function: Kubernetes response validation and retry logic
- ✅ MCP Server Configuration: Docker container setup and volume mounting
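Provider tests of this kind typically patch the environment so real credentials are never required. A minimal pytest-style sketch (illustrative, not taken from tests.py):

```python
import os
from unittest import mock

def test_gemini_env_isolated():
    """Patch os.environ inside the test so no real API key is needed;
    mock.patch.dict rolls the change back automatically on exit."""
    with mock.patch.dict(os.environ, {"GEMINI_API_KEY": "test-key"}):
        assert os.environ["GEMINI_API_KEY"] == "test-key"
```

Because the patch is scoped to the `with` block, tests can run in any order without leaking fake credentials into each other.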
- Fork the repository
- Create a feature branch
- Make your changes
- Test with different providers
- Submit a pull request
MCP Server Connection: Ensure Docker is running and the k8s-mcp-server image is available
docker pull ghcr.io/alexei-led/k8s-mcp-server:latest
Provider Authentication: Verify environment variables are set correctly
# Check current environment
env | grep -E "(OLLAMA|GEMINI|GITHUB)"
Kubeconfig Access: Ensure kubectl works locally
kubectl cluster-info
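The checks above can be bundled into a small preflight routine; a sketch under the assumption that Docker, kubectl, and the kubeconfig are the three prerequisites to verify (messages are illustrative):

```python
import shutil
from pathlib import Path

def preflight(kubeconfig: str = "~/.kube/config") -> list:
    """Return a list of human-readable problems found before starting Kaia."""
    problems = []
    if shutil.which("docker") is None:
        problems.append("docker not found on PATH -- is Docker installed and running?")
    if shutil.which("kubectl") is None:
        problems.append("kubectl not found on PATH")
    if not Path(kubeconfig).expanduser().exists():
        problems.append(f"kubeconfig not found at {kubeconfig}")
    return problems
```

An empty list means the basics are in place; anything else is worth fixing before launching the assistant.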
This project is open source. See LICENSE file for details.
- Built with pydantic-ai
- Kubernetes integration via k8s-mcp-server
- Inspired by the need for safe, intelligent Kubernetes operations