| Field | Value |
|---|---|
| title | Aurora AI Framework - Complete API Reference \| 132 Endpoints Documentation |
| description | Complete API reference for Aurora AI Framework v1.0.0 with 132 professional endpoints, enhanced monitoring APIs, data validation APIs, and performance optimization features. |
| keywords | Aurora AI API, API documentation, REST API, 132 endpoints, monitoring API, data validation API, performance optimization, enterprise AI, machine learning API |
| author | Aurora Development Team |
| robots | index, follow |
| canonical | https://aurora-ai.github.io/docs/API_REFERENCE.md |
Aurora AI provides comprehensive API endpoints across integrated systems with enhanced monitoring, intelligent data validation, and optimized performance capabilities. This reference covers all endpoints including new enhanced features.
- Base URL: http://localhost:8081
- Server Status: Active and Responding
- Debug Mode: Enabled
- Health Check: /api/health - Status: 200 OK - Interface: Aurora AI Sci-Fi Interface
- Last Updated: 2026-05-06
📚 Related Documentation: For complete system architecture, see our Architecture Guide. For implementation guidance, check our Integration Guide.
🚀 Quick Start: New to Aurora AI? Start with our Installation Guide and User Guide.
🔧 Developers: Explore our Testing Guide and Troubleshooting Guide for comprehensive development support.
- Real-time Metrics: Real-time system metrics (15+ indicators) with detailed monitoring guide
- Resource Optimization: Resource optimization endpoints with performance optimization
- Enhanced Alerting: Enhanced alerting with recommendations and system operations
- Performance Analytics: Performance analytics and tuning with benchmarking
- Auto-Repair: Auto-repair functionality with data validation guide
- Quality Scoring: Quality scoring and reporting with quality assurance
- Anomaly Detection: Anomaly detection and handling with advanced analytics
- Data Profiling: Comprehensive data profiling with data processing guide
- Resource Management: Automatic resource management with performance guide
- Memory Optimization: Memory cleanup and optimization with system operations
- System Monitoring: CPU and disk usage monitoring with monitoring guide
- Process Tracking: Process-level performance tracking with advanced monitoring
- Enterprise Security: Comprehensive security features with security guide
- Access Control: Role-based access control with configuration guide
- Data Protection: Advanced data protection with backup & recovery
- Compliance: Industry compliance with security compliance
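The enhanced alerting documented in this reference classifies metrics into ok/warning/critical bands (CPU and memory warn above 80% and go critical above 90%; disk warns above 85% and goes critical above 95%). A minimal, purely illustrative sketch of such a classifier; `THRESHOLDS` and `classify` are hypothetical names, not part of the Aurora API:

```python
# Illustrative sketch only; THRESHOLDS and classify() are not Aurora APIs.
THRESHOLDS = {
    "cpu": (80, 90),     # percent: (warning, critical)
    "memory": (80, 90),
    "disk": (85, 95),
}

def classify(metric: str, percent: float) -> str:
    """Return 'ok', 'warning', or 'critical' for a usage percentage."""
    warning, critical = THRESHOLDS[metric]
    if percent > critical:
        return "critical"
    if percent > warning:
        return "warning"
    return "ok"

print(classify("cpu", 85.0))   # warning
print(classify("disk", 96.0))  # critical
```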
- /api/status - System health and status
- /api/health - Health check endpoint
- /api/training/status - Training pipeline status
- /api/models - Model repository overview
- /api/data/validate - Data validation (POST)
- /api/security/status - Security system status
- /api/security/encrypt - Data encryption (POST)
- /api/feedback/status - Feedback system status

- /api/data/inventory - Data inventory and metadata
- /api/data/cleanup - Data cleanup operations (POST)
- /api/data/backup - Data backup operations (POST)
- /api/data/metrics - Data analytics and metrics
- /api/data/validate - ENHANCED Advanced data validation (POST)
- /api/data/repair - NEW Auto-repair functionality (POST)
- /api/data/quality - NEW Data quality reporting (GET)
- /api/data/profile - NEW Comprehensive data profiling (GET)

- /api/security/status - Security system status
- /api/security/encrypt - Data encryption and decryption (POST)

- /api/monitoring/advanced - Advanced monitoring dashboard
- /api/monitoring/alerts - System alerts and notifications
- /api/monitoring/performance - Performance metrics and analytics
- /api/monitoring/metrics - Real-time system metrics
- /api/monitoring/system - NEW Comprehensive system metrics
- /api/monitoring/optimize - NEW Resource optimization (POST)
- /api/monitoring/quality - NEW Data quality monitoring
- /api/monitoring/health - NEW Enhanced health monitoring

- /api/reports/generate - Generate comprehensive reports (POST)
- /api/reports/list - List available reports

- /api/config/current - Current configuration status
- /api/config/validate - Configuration validation (POST)
- /api/config/merge - Configuration merging (POST)
- /api/config/secrets - Secrets management (POST)

- /api/tests/history - Test execution history
- /api/tests/coverage - Test coverage analysis

- /api/docs/api - API documentation
- /api/docs/examples - Usage examples
- /api/docs/architecture - System architecture documentation

- /api/workflows/create - Create new workflow (POST)
- /api/workflows/list - List available workflows

- /api/examples/quick-test - Quick system test (POST)
- /api/examples/sample-workflow - Sample workflow execution (POST)
- /api/examples/tutorials - Tutorial documentation

- /api/logs/system - System logs
- /api/logs/audit - Audit trail logs
- /api/logs/errors - Error logs
- /api/logs/summary - Log summary and analytics

- /api/core/components - Core component registry
- /api/core/registry - Component registration and discovery
- /api/core/utilities - Core utility functions

- /api/models/repository - Model repository overview
- /api/models/version - Model versioning (POST)
- /api/models/compare - Model comparison (POST)
- /api/models/deploy - Model deployment (POST)

- /api/pipeline/status - Pipeline status and health
- /api/pipeline/execute - Execute pipeline (POST)
- /api/pipeline/configure - Pipeline configuration (POST)
- /api/pipeline/metrics - Pipeline performance metrics

- /api/inference/status - Inference service status
- /api/inference/batch - Batch inference (POST)
- /api/inference/performance - Inference performance analytics
- /api/inference/scale - Service scaling (POST)

- /api/orchestration/status - Orchestration system status
- /api/orchestration/execute - Execute orchestration workflow (POST)
- /api/orchestration/schedule - Schedule orchestration tasks (POST)
- /api/orchestration/diagnostics - System diagnostics

- /api/config/utilities - Configuration utilities overview
- /api/config/validate - Advanced configuration validation (POST)
- /api/config/merge - Configuration merging (POST)
- /api/config/secrets - Secrets management (POST)

- /api/training/enhanced - Enhanced model training (POST)
- /api/training/compare - Model algorithm comparison (POST)
- /api/training/hyperopt - Hyperparameter optimization (POST)
- /api/training/ensemble - Ensemble model creation (POST)

- /api/monitoring/analytics - Advanced monitoring analytics
- /api/monitoring/predict - Performance prediction (POST)
- /api/monitoring/benchmark - Performance benchmarking (POST)

- /api/optimization/analyze - Performance analysis (POST)
- /api/optimization/execute - Optimization execution (POST)
- /api/optimization/monitor - Optimization monitoring

- /api/resources/status - Resource status monitoring
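Endpoints in the catalog above can be called with any HTTP client. A minimal Python sketch using only the standard library, against the base URL stated at the top of this reference (no Aurora-specific client library is implied; `build_request` is an illustrative helper):

```python
import urllib.request

BASE_URL = "http://localhost:8081"  # Aurora's documented base URL

def build_request(path: str) -> urllib.request.Request:
    """Build a GET request for an Aurora endpoint (not yet sent)."""
    return urllib.request.Request(
        BASE_URL + path,
        headers={"Accept": "application/json"},
    )

req = build_request("/api/status")
print(req.full_url)      # http://localhost:8081/api/status
print(req.get_method())  # GET
```

Sending the request with `urllib.request.urlopen(req)` would return the JSON bodies documented in the endpoint sections that follow, assuming the server is up and responding.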
Endpoint: /api/monitoring/system
Method: GET
Description: Returns 15+ comprehensive system metrics in real-time
Response Format:

```json
{
  "timestamp": "2026-05-05T23:50:06.306795",
  "cpu_percent": 45.2,
  "cpu_count": 8,
  "cpu_freq_mhz": 2400.0,
  "memory_percent": 67.8,
  "memory_available_gb": 8.2,
  "memory_used_gb": 16.4,
  "disk_percent": 73.5,
  "disk_free_gb": 45.7,
  "disk_used_gb": 126.8,
  "network_bytes_sent_mb": 1024.5,
  "network_bytes_recv_mb": 2048.3,
  "process_memory_mb": 245.6,
  "process_cpu_percent": 12.3,
  "process_threads": 8
}
```

Endpoint: /api/monitoring/optimize
Method: POST
Description: Automatically optimizes system resources based on current usage
Request Body:
```json
{
  "optimization_level": "moderate",
  "target_metrics": ["memory", "cpu"],
  "force_cleanup": false
}
```

Response Format:

```json
{
  "timestamp": "2026-05-05T23:50:06.306795",
  "optimizations_applied": [
    {
      "type": "memory",
      "action": "garbage_collection",
      "description": "Trigger garbage collection to free memory"
    }
  ],
  "metrics_after": {
    "memory_percent": 58.2,
    "process_memory_mb": 198.4
  }
}
```

Endpoint: /api/monitoring/health
Method: GET
Description: Provides comprehensive health status with recommendations
Response Format:
```json
{
  "status": "healthy",
  "checks": {
    "cpu": "ok",
    "memory": "warning",
    "disk": "ok",
    "processes": "ok"
  },
  "alerts": [
    {
      "type": "memory",
      "severity": "warning",
      "message": "Memory usage at 78%",
      "recommendation": "Monitor memory usage closely"
    }
  ],
  "recommendations": ["Consider memory optimization in next cycle"]
}
```

Endpoint: /api/data/repair
Method: POST
Description: Automatically detects and repairs common data issues
Request Body:
```json
{
  "data_source": "input.csv",
  "repair_options": {
    "handle_missing": "auto",
    "remove_duplicates": true,
    "cap_outliers": true,
    "drop_high_null_columns": true
  }
}
```

Response Format:
```json
{
  "timestamp": "2026-05-05T23:50:06.306795",
  "original_shape": [1000, 15],
  "repaired_shape": [995, 14],
  "quality_score": 0.95,
  "repair_log": [
    "Removed 5 duplicate rows",
    "Dropped column 'high_null_col' (85% null values)",
    "Filled missing values in column 'feature_x'"
  ],
  "recommendations": ["Data quality is now excellent"]
}
```

Endpoint: /api/data/quality
Method: GET
Description: Generates comprehensive data quality report
Response Format:
```json
{
  "timestamp": "2026-05-05T23:50:06.306795",
  "dataset_info": {
    "shape": [1000, 15],
    "memory_usage_mb": 45.2,
    "column_count": 15,
    "row_count": 1000
  },
  "quality_metrics": {
    "completeness": 94.5,
    "uniqueness": 89.2,
    "consistency": 95.0,
    "validity": 92.8
  },
  "column_analysis": {
    "feature1": {
      "dtype": "float64",
      "null_percentage": 2.1,
      "unique_percentage": 78.5
    }
  },
  "recommendations": [
    "Consider data imputation strategies for missing values",
    "High duplicate ratio detected. Consider deduplication"
  ]
}
```

Endpoint: /api/data/profile
Method: GET
Description: Provides detailed statistical profiling of dataset
Response Format:
```json
{
  "timestamp": "2026-05-05T23:50:06.306795",
  "profile": {
    "numeric_columns": 8,
    "categorical_columns": 4,
    "datetime_columns": 2,
    "text_columns": 1,
    "statistics": {
      "total_cells": 15000,
      "missing_cells": 315,
      "duplicate_rows": 12
    },
    "data_types": {
      "int64": 3,
      "float64": 5,
      "object": 5,
      "datetime64[ns]": 2
    }
  }
}
```

Usage example - collecting real-time system metrics:

```python
monitor = ModelMonitor()
metrics = monitor._collect_system_metrics()
print(f"CPU: {metrics['cpu_percent']}%")
print(f"Memory: {metrics['memory_percent']}%")
```

Usage example - applying resource optimization:

```python
optimization = monitor.optimize_resources()
for opt in optimization['optimizations_applied']:
    print(f"Applied: {opt['description']}")
```

Alert thresholds are monitored automatically:

- CPU: >80% warning, >90% critical
- Memory: >80% warning, >90% critical
- Disk: >85% warning, >95% critical
- Process memory: >1GB warning

Usage example - validating and repairing data:

```python
validator = DataValidator()
clean_data, results = validator.validate_and_repair_data(raw_data)
print(f"Quality improved from {results['original_quality']} to {results['quality_score']}")
```

Usage example - generating a data quality report:

```python
report = validator.get_data_quality_report(data)
print(f"Completeness: {report['quality_metrics']['completeness']}%")
for rec in report['recommendations']:
    print(f"Recommendation: {rec}")
```

Usage example - serializing NumPy values with the custom JSON encoder:

```python
import json

import numpy as np

from modules.monitoring import NumpyJSONEncoder

data_with_numpy = {
    'numpy_array': np.array([1, 2, 3]),
    'numpy_float': np.float64(3.14159),
    'regular_data': {'key': 'value'}
}

json_str = json.dumps(data_with_numpy, cls=NumpyJSONEncoder)
# No more Float64DType serialization errors!
```

- Speed: Reduced from 1.0s to 0.1s intervals
- Coverage: 15+ metrics vs previous basic monitoring
- Accuracy: Process-level tracking included
- Storage: Intelligent history management
- Memory Cleanup: Automatic when >500MB usage
- History Management: Reduces to 50 entries when needed
- Garbage Collection: Triggered on high memory usage
- CPU Optimization: Monitors frequency and load
- Auto-Repair: Handles missing values, duplicates, outliers
- Quality Scoring: Comprehensive quality assessment
- Smart Recommendations: Context-aware improvement suggestions
- Statistical Analysis: Deep data profiling capabilities
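The completeness and uniqueness figures in the /api/data/quality response can be understood through a small, hypothetical computation. Aurora's internal formulas are not spelled out in this reference, so the definitions below (non-null cell ratio and unique-row ratio, as percentages) are illustrative only:

```python
def completeness(rows):
    """Percentage of non-null cells across all rows (illustrative definition)."""
    cells = [v for row in rows for v in row.values()]
    return 100.0 * sum(v is not None for v in cells) / len(cells)

def uniqueness(rows):
    """Percentage of distinct rows (illustrative definition)."""
    seen = {tuple(sorted(r.items())) for r in rows}
    return 100.0 * len(seen) / len(rows)

data = [
    {"x": 1, "y": "a"},
    {"x": 2, "y": None},   # one missing cell
    {"x": 1, "y": "a"},    # duplicate row
    {"x": 3, "y": "b"},
]
print(completeness(data))  # 87.5 (7 of 8 cells populated)
print(uniqueness(data))    # 75.0 (3 distinct rows of 4)
```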
- /api/resources/allocate - Resource allocation (POST)
- /api/resources/optimize - Resource optimization (POST)

- /api/integration/test - Integration testing (POST)
- /api/integration/validate - System validation (POST)
- /api/integration/benchmark - Integration benchmarking (POST)

- /api/validation/schema - Schema validation (POST)
- /api/validation/quality - Data quality assessment (POST)
- /api/validation/statistical - Statistical validation (POST)
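POST endpoints such as /api/data/validate take a JSON request body. A standard-library sketch of constructing (not sending) such a request, with the same data-validation payload used in the curl examples:

```python
import json
import urllib.request

BASE_URL = "http://localhost:8081"  # Aurora's documented base URL

payload = {"data": {"field1": "value1", "field2": "value2"}}
req = urllib.request.Request(
    BASE_URL + "/api/data/validate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
print(req.get_method())      # POST
print(json.loads(req.data))  # round-trips the payload
```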
```bash
curl -X GET "http://localhost:8081/api/status"
```

```bash
curl -X POST "http://localhost:8081/api/data/validate" \
  -H "Content-Type: application/json" \
  -d '{"data": {"field1": "value1", "field2": "value2"}}'
```

```bash
curl -X POST "http://localhost:8081/api/training/enhanced" \
  -H "Content-Type: application/json" \
  -d '{"algorithm": "RandomForest", "optimization": true}'
```

```bash
curl -X POST "http://localhost:8081/api/optimization/analyze" \
  -H "Content-Type: application/json" \
  -d '{"scope": "full_system", "depth": "comprehensive"}'
```

```bash
curl -X POST "http://localhost:8081/api/resources/allocate" \
  -H "Content-Type: application/json" \
  -d '{"type": "application", "application": "Aurora AI Framework"}'
```

Standard success response:

```json
{
  "status": "SUCCESS|COMPLETED|FAILED",
  "message": "Human-readable message",
  "data": {
    // Response data specific to endpoint
  },
  "quantum_signature": "AURORA-SIGNATURE-TIMESTAMP"
}
```

Error response:

```json
{
  "error": "ERROR_CODE",
  "message": "Detailed error description",
  "details": {
    // Additional error details
  }
}
```

- Authentication: All endpoints support JWT token authentication
- Authorization: Role-based access control (RBAC)
- Encryption: Quantum-grade encryption for sensitive data
- Audit Trail: Complete audit logging for all operations
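Since all endpoints support JWT authentication, a client attaches the token as a bearer credential in the `Authorization` header. The header shape below is the common `Bearer` convention; `<JWT>` is a placeholder, and how tokens are issued depends on your deployment:

```python
import urllib.request

# <JWT> is a placeholder; obtain a real token from your deployment.
token = "<JWT>"
req = urllib.request.Request(
    "http://localhost:8081/api/status",
    headers={"Authorization": f"Bearer {token}"},
)
print(req.get_header("Authorization"))  # Bearer <JWT>
```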
- Standard Endpoints: 1000 requests/minute
- Heavy Operations: 100 requests/minute
- Batch Operations: 50 requests/minute
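To stay under these limits client-side, a sliding-window throttle is a common approach. A hedged sketch, with the class name and structure purely illustrative (not an Aurora API):

```python
import time
from collections import deque

class SlidingWindowLimiter:
    """Allow at most `limit` calls per `window` seconds (client-side throttle)."""

    def __init__(self, limit, window=60.0):
        self.limit = limit
        self.window = window
        self.calls = deque()  # timestamps of recent allowed calls

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Drop timestamps that have fallen out of the window.
        while self.calls and now - self.calls[0] >= self.window:
            self.calls.popleft()
        if len(self.calls) < self.limit:
            self.calls.append(now)
            return True
        return False

# Standard endpoints: 1000 requests/minute.
limiter = SlidingWindowLimiter(limit=1000, window=60.0)
print(limiter.allow(now=0.0))  # True
```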
- Error Handling: Always check response status codes
- Retry Logic: Implement exponential backoff for failed requests
- Pagination: Use pagination for large datasets
- Caching: Cache frequently accessed data
- Monitoring: Monitor API usage and performance
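The retry-with-exponential-backoff practice above can be sketched as a small wrapper. The helper name and the use of full jitter are illustrative choices, not part of Aurora:

```python
import random
import time

def retry_with_backoff(call, attempts=5, base_delay=0.5, sleep=time.sleep):
    """Retry `call` on exception, doubling the delay ceiling each attempt."""
    for attempt in range(attempts):
        try:
            return call()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts; surface the last error
            # Full jitter: sleep a random amount up to base * 2^attempt.
            sleep(random.uniform(0, base_delay * 2 ** attempt))

# Example: a flaky call that succeeds on the third try.
state = {"n": 0}
def flaky():
    state["n"] += 1
    if state["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(retry_with_backoff(flaky, sleep=lambda s: None))  # ok
```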
For API support and troubleshooting, refer to the Troubleshooting Guide.
Aurora AI API Reference
132 Professional Endpoints • Enterprise-Grade Security • 100% System Reliability