Run any open-source LLM, such as Llama or Mistral, as an OpenAI-compatible API endpoint in the cloud.
Updated Jan 20, 2025 · Python
A new DSL and server for AI agents and multi-step tasks
A RAG (Retrieval-Augmented Generation) framework by TrueFoundry for building modular, open-source applications for production
AutoRAG: An Open-Source Framework for Retrieval-Augmented Generation (RAG) Evaluation & Optimization with AutoML-Style Automation
AIConfig is a config-based framework to build generative AI applications.
The collaborative spreadsheet for AI. Chain cells into powerful pipelines, experiment with prompts and models, and evaluate LLM responses in real-time. Work together seamlessly to build and iterate on AI applications.
Python SDK for running evaluations on LLM-generated responses
An end-to-end LLM reference implementation providing a Q&A interface for Airflow and Astronomer
Cluster/scheduler health monitoring for GPU jobs on Kubernetes
Friendli: the fastest serving engine for generative AI
Miscellaneous code and writings for MLOps
The prompt engineering, prompt management, and prompt evaluation tool for TypeScript, JavaScript, and Node.js.
Deployment of RAG + LLM model serving on multiple K8s cloud clusters
Lightweight Agent Framework for building AI apps with any LLM
An AI agent IaC (Infrastructure as Code) tool that aims to make developing and deploying AI agents easier.