A Multi-Agent Retrieval Augmented Generation (RAG) system built with LangChain and LangGraph.
This repository implements a graph-based multi-agent RAG system with the following components:
- Document Processing Pipeline: Handles document ingestion, chunking, and indexing
- Agent-based Graph System: Utilizes LangGraph for orchestrating multiple agents
- Streamlit Web Interface: Provides an interactive UI for querying the system
- Integration with LLMs: Uses various large language models to power the agents
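To illustrate the document-processing step, here is a minimal sketch of fixed-size chunking with overlap, using only the standard library (the chunk size and overlap values are illustrative, not the repository's actual settings):

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows for indexing.

    Overlap keeps context that straddles a chunk boundary retrievable
    from either neighboring chunk.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size]
            for i in range(0, max(len(text) - overlap, 1), step)]
```

Production pipelines usually chunk on token or sentence boundaries rather than raw characters, but the overlap idea is the same.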
## Prerequisites

- Python 3.9+
- Git
- Access to LLM APIs (Fireworks, OpenAI, etc.)
- Sufficient disk space for document storage
## Installation

- Clone the repository:

  ```bash
  git clone https://github.com/yourusername/langchain-qs.git
  cd langchain-qs
  ```
- Use the Makefile to set up your environment and install dependencies:

  ```bash
  # Create virtual environment and install dependencies
  make install

  # Create .env file with placeholder values
  make env
  ```
- Configure your `.env` file with your API keys:

  ```bash
  TAVILY_API_KEY=
  FIREWORKS_API_KEY=
  API_PUBLIC_KEY=
  API_PRIVATE_KEY=
  GROUP_ID=
  # Add other API keys as needed
  ```
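At startup, application code typically reads these values from the environment and fails fast if a required key is missing. A minimal sketch using only the standard library (the `require_env` helper is hypothetical, not part of this repository; the variable name matches the `.env` template above):

```python
import os

def require_env(name: str) -> str:
    """Return an environment variable's value, raising if it is unset or blank."""
    value = os.environ.get(name, "").strip()
    if not value:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value

# Example: resolve the Fireworks key before constructing any LLM client.
# fireworks_key = require_env("FIREWORKS_API_KEY")
```

Failing at startup with a named variable is much easier to debug than an authentication error deep inside an API call.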
## Running the App

Start the Streamlit app:

```bash
make all
```

The application will be available at http://localhost:8501 by default.
You can also run the application using Docker:

```bash
# Build Docker image
make docker-build

# Run Docker container
make docker-run
```
## Usage

- Open the application in your web browser
- Select your preferred LLM model from the sidebar
- Enter your query in the input field
- The system will:
  - Process your query
  - Retrieve relevant information
  - Generate a response using the multi-agent framework
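The three steps above form a simple pipeline. A minimal, dependency-free sketch of the flow (keyword-overlap scoring stands in for the real vector search, and `generate` is a placeholder for the LLM call; all three function names are illustrative, not the repository's API):

```python
def process_query(query: str) -> list[str]:
    """Normalize the query into lowercase keyword tokens."""
    return [t for t in query.lower().split() if len(t) > 2]

def retrieve(tokens: list[str], documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by keyword overlap; a stand-in for vector search."""
    scored = sorted(
        documents,
        key=lambda d: sum(t in d.lower() for t in tokens),
        reverse=True,
    )
    return scored[:top_k]

def generate(query: str, context: list[str]) -> str:
    """Placeholder for the LLM call that would synthesize an answer."""
    return f"Answer to {query!r} based on {len(context)} retrieved chunk(s)."

docs = [
    "LangGraph orchestrates agents as nodes in a graph.",
    "Streamlit provides the web interface.",
    "Documents are chunked and indexed for retrieval.",
]
query = "How are agents orchestrated?"
answer = generate(query, retrieve(process_query(query), docs))
```

In the real system each stage is an agent node in the LangGraph graph, so stages can loop (for example, re-running retrieval after a query rewrite) instead of executing strictly once.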
## Features

- Multiple LLM Support: Choose between different language models
- Graph-based Agent Workflow: Agents collaborate to improve response quality
- Document Retrieval: Automatically fetches relevant information from indexed documents
- Query Refinement: Rewrites queries to improve search results when needed
- One-Click Deployments: Deploy easily to various cloud platforms
- GitHub Integration: Seamless version control and collaboration support
- Expanded LLM Providers: Support for newer models from various providers
- Enhanced Caching: Improved response times through optimized caching
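The caching idea can be illustrated with the standard library alone; a minimal sketch (the `answer_query` function is hypothetical and stands in for the real retrieval-plus-generation call):

```python
from functools import lru_cache

@lru_cache(maxsize=256)
def answer_query(query: str) -> str:
    """Hypothetical expensive call; repeated queries hit the in-memory cache."""
    # In the real system this would run retrieval and an LLM call.
    return f"response for: {query}"

answer_query("what is RAG?")  # computed on first call
answer_query("what is RAG?")  # served from cache on repeat
```

`lru_cache` only caches within one process; a deployed system would typically layer a shared cache (e.g. keyed by a normalized query) on top of this idea.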
## Make Commands

For convenience, the following make commands are available:

```bash
make help  # Show all available commands
```
## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.