# LangGraph RAG Workflow with Elasticsearch

This project contains the code for building a custom agent with the LangGraph Retrieval Agent Template and Elasticsearch, implementing an efficient Retrieval-Augmented Generation (RAG) workflow for AI-driven responses.

## Introduction

LangGraph, developed by LangChain, simplifies the creation of retrieval-based question-answering systems. By using LangGraph Studio and the LangGraph CLI, you can quickly build agents that index and retrieve documents using Elasticsearch.

## Prerequisites

Before you start, make sure you have the following:

- Elasticsearch (Cloud or on-prem, version 8.0.0 or higher)
- Python 3.9+
- Access to an LLM provider such as Cohere, OpenAI, or Anthropic

## Steps to Set Up the LangGraph App

### 1. Install LangGraph CLI

```bash
pip install --upgrade "langgraph-cli[inmem]"
```

### 2. Create LangGraph App

```bash
mkdir lg-agent-demo
cd lg-agent-demo
langgraph new lg-agent-demo
```

### 3. Install Dependencies

Create a virtual environment and install the dependencies.

For macOS:

```bash
python3 -m venv lg-demo
source lg-demo/bin/activate
pip install -e .
```

For Windows:

```bash
python -m venv lg-demo
lg-demo\Scripts\activate
pip install -e .
```

### 4. Set Up Environment

Create a .env file by copying the example:

```bash
cp .env.example .env
```

Then configure the .env file with your API keys and the URLs for Elasticsearch and your LLM provider.
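
The exact variable names come from the template's .env.example; the values below are placeholders only (illustrative names, assuming an Elasticsearch API key and Cohere as the LLM provider):

```
ELASTICSEARCH_URL=https://your-deployment.es.example.cloud:443
ELASTICSEARCH_API_KEY=your-elasticsearch-api-key
COHERE_API_KEY=your-cohere-api-key
```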

### 5. Update configuration.py

Modify the configuration.py file to set up your LLM models, like Cohere (or OpenAI/Anthropic), as shown below:

```python
embedding_model = "cohere/embed-english-v3.0"
response_model = "cohere/command-r-08-2024"
query_model = "cohere/command-r-08-2024"
```
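
In the generated template these settings typically appear as fields on a configuration dataclass; the snippet below is a simplified, illustrative sketch of that pattern (field names beyond the three models are omitted), not the template's exact code:

```python
from dataclasses import dataclass, field


@dataclass
class Configuration:
    """Illustrative sketch only -- edit the defaults in the generated configuration.py."""

    embedding_model: str = field(
        default="cohere/embed-english-v3.0",
        metadata={"description": "Embedding model used when indexing and searching documents."},
    )
    response_model: str = field(
        default="cohere/command-r-08-2024",
        metadata={"description": "Model used to generate the final answer."},
    )
    query_model: str = field(
        default="cohere/command-r-08-2024",
        metadata={"description": "Model used to formulate search queries."},
    )
```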

## Running the Agent

### 1. Launch LangGraph Server

```bash
cd lg-agent-demo
langgraph dev
```

This starts the LangGraph API server locally.

### 2. Open LangGraph Studio

You can now access the LangGraph Studio UI and see the following graphs:

<img width="1306" alt="Screenshot 2025-04-01 at 6 02 31 PM" src="https://github.com/user-attachments/assets/c7c13645-99a1-48b2-8d3c-c1135fd33f54" />

Indexer graph: indexes documents into Elasticsearch.

<img width="776" alt="Screenshot 2025-03-11 at 6 08 09 PM" src="https://github.com/user-attachments/assets/5d61b9d0-ae9e-4d66-9e99-fa27bce7a1d0" />

Retrieval graph: retrieves data from Elasticsearch and answers queries using the LLM.

### 3. Index Sample Documents

Index the sample documents (representing the NovaTech Solutions reports) into Elasticsearch.
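
You can do this from Studio by running the indexer graph, or programmatically through the SDK. The sketch below is illustrative: the graph name ("indexer") and the input shape ({"docs": [...]}) follow the template's defaults, so confirm them against langgraph.json and the indexer's state definition in your generated project.

```python
import asyncio

from langgraph_sdk import get_client

# Placeholder documents; the figures in the real NovaTech reports are not
# reproduced here.
SAMPLE_DOCS = [
    {"page_content": "NovaTech Solutions Q1 2025 financial report ..."},
    {"page_content": "NovaTech Solutions product update, Q1 2025 ..."},
]


async def main() -> None:
    client = get_client(url="http://localhost:2024")
    # Run the indexer graph once as a stateless run (no persistent thread).
    await client.runs.wait(None, "indexer", input={"docs": SAMPLE_DOCS})


asyncio.run(main())
```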

### 4. Run the Retrieval Graph

Enter a query like:

```
What was NovaTech Solutions' total revenue in Q1 2025?
```

The system will retrieve relevant documents and provide an answer.
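
The same query can also be sent to the running server with the SDK. As with the indexing sketch, the graph name ("retrieval_graph") and the "messages" input key are assumptions based on the template's defaults; check langgraph.json and the retrieval graph's state before relying on them.

```python
import asyncio

from langgraph_sdk import get_client


async def main() -> None:
    client = get_client(url="http://localhost:2024")
    result = await client.runs.wait(
        None,
        "retrieval_graph",
        input={
            "messages": [
                {
                    "role": "user",
                    "content": "What was NovaTech Solutions' total revenue in Q1 2025?",
                }
            ]
        },
    )
    # The final state includes the conversation with the generated answer.
    print(result["messages"][-1])


asyncio.run(main())
```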

## Customizing the Retrieval Agent

### Query Prediction

To enhance the user experience, add a query prediction feature that suggests follow-up questions based on the context of previous queries and retrieved documents. Here's what to do (a sketch of steps 1-3 appears after the configuration snippet below):

1. Add a `predict_query` function in `graph.py`.
2. Modify the `respond` function to return a response object.
3. Update the graph structure to include a new node for query prediction.
4. Modify the prompts and configuration: update `prompts.py` to define a prompt for predicting the next question, then add the new prompt to `configuration.py`:

```python
predict_next_question_prompt: str = "Your prompt here"
```
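
The following is a minimal sketch of what steps 1-3 might look like. The function, node, and state-field names are illustrative (not the template's exact code), and it assumes `langchain-cohere` is installed for the Cohere model; adapt it to the State and Configuration classes in your generated project.

```python
# Illustrative sketch of the query-prediction node (steps 1-3 above).
from langchain.chat_models import init_chat_model
from langchain_core.messages import SystemMessage


async def predict_query(state: dict) -> dict:
    """Suggest the next three questions based on the conversation so far."""
    # Swap in the provider/model configured in configuration.py.
    model = init_chat_model("command-r-08-2024", model_provider="cohere")
    instruction = SystemMessage(
        content="Based on the conversation and retrieved context, suggest the "
        "next three questions the user is likely to ask."
    )
    response = await model.ainvoke([instruction, *state["messages"]])
    return {"predicted_questions": response.content}


# Step 3: wire the new node into the retrieval graph after the respond step, e.g.:
#   builder.add_node("predict_query", predict_query)
#   builder.add_edge("respond", "predict_query")
```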

Finally, re-run the retrieval graph. Run the query again to see the next three predicted questions based on the context.

<img width="732" alt="Screenshot 2025-03-17 at 3 06 54 PM" src="https://github.com/user-attachments/assets/88832fa6-4dc9-41cc-894d-d3d437bf4d80" />

## Conclusion

By using the LangGraph Retrieval Agent template with Elasticsearch, you can:

- Accelerate development by using pre-configured templates.
- Easily deploy with built-in API support and scaling.
- Customize workflows to fit your specific use case.