This repository contains the code needed to run a modular chat function, written in Python, that provides multiple chatbot API endpoints.
This chapter helps you quickly set up a new Python chat module function using this repository.
Note

To develop this function further, you will require the following environment variables in your `.env` file:

If you use Azure OpenAI:

- `AZURE_OPENAI_API_KEY`
- `AZURE_OPENAI_ENDPOINT`
- `AZURE_OPENAI_API_VERSION`
- `AZURE_OPENAI_CHAT_DEPLOYMENT_NAME`
- `AZURE_OPENAI_EMBEDDING_3072_DEPLOYMENT`
- `AZURE_OPENAI_EMBEDDING_1536_DEPLOYMENT`
- `AZURE_OPENAI_EMBEDDING_3072_MODEL`
- `AZURE_OPENAI_EMBEDDING_1536_MODEL`

If you use OpenAI:

- `OPENAI_API_KEY`
- `OPENAI_MODEL`

For monitoring of the LLM calls (follow the LangSmith instructions on how to set this up):

- `LANGCHAIN_TRACING_V2`
- `LANGCHAIN_ENDPOINT`
- `LANGCHAIN_API_KEY`
- `LANGCHAIN_PROJECT`
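For reference, a minimal `.env` sketch for the plain OpenAI setup might look like the following; the values are placeholders only, and the Azure OpenAI variables follow the same pattern if you use that provider instead.

```bash
# .env — placeholder values; replace them with your own keys, models, and project names
OPENAI_API_KEY=sk-...
OPENAI_MODEL=gpt-4o

# Optional: LangSmith monitoring of LLM calls
LANGCHAIN_TRACING_V2=true
LANGCHAIN_ENDPOINT=https://api.smith.langchain.com
LANGCHAIN_API_KEY=lsv2_...
LANGCHAIN_PROJECT=lambda-chat
```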
Clone this repository to your local machine using the following command:

```bash
git clone https://github.com/lambda-feedback/lambda-chat
```
You're ready to start developing your chat function. Head over to the Development section to learn more.
In the `README.md` file, change the title and description so that they fit the purpose of your chat function.

Also, don't forget to update or delete the Quickstart chapter from the `README.md` file after you've completed these steps.
You can create your own invocation for your own agents hosted anywhere. Copy the `base_agent` from `src/agents/` and edit it to match your LLM agent requirements, then import the new invocation in the `module.py` file (see the sketch below).
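As a rough illustration, the import and dispatch in `module.py` could look like the following sketch; the module path, the `invoke_my_agent` function, and the `chat_module` signature are assumptions for this example and may differ from the actual code in this repository.

```python
# src/module.py — illustrative sketch only; names and signatures are assumed.
from src.agents.my_agent import invoke_my_agent  # hypothetical agent copied from base_agent


def chat_module(message, params):
    """Dispatch the incoming message to the selected agent."""
    agent_type = params.get("agent_type", "my_agent")
    if agent_type == "my_agent":
        return invoke_my_agent(message, params)
    raise ValueError(f"Unknown agent_type: {agent_type}")
```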
Your agent can be based on an LLM hosted anywhere. OpenAI, AzureOpenAI, and Ollama models are currently available, but you can introduce your own API call in `src/agents/llm_factory.py`.
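As a sketch of what such an extension might look like (the actual factory in this repository is likely structured differently, and `MyCustomChatModel` is a hypothetical placeholder for your own client):

```python
# src/agents/llm_factory.py — hypothetical extension sketch, not the repository's actual code.
import os


def get_llm(provider: str):
    """Return a chat model client for the requested provider."""
    if provider == "openai":
        # Assumes the langchain-openai package is installed.
        from langchain_openai import ChatOpenAI
        return ChatOpenAI(model=os.environ["OPENAI_MODEL"])
    if provider == "my-custom-api":
        # Your own API call goes here; MyCustomChatModel is a placeholder.
        from my_custom_client import MyCustomChatModel
        return MyCustomChatModel(api_key=os.environ["MY_CUSTOM_API_KEY"])
    raise ValueError(f"Unsupported provider: {provider}")
```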
```
.github/workflows/
  dev.yml             # deploys the DEV function to Lambda Feedback
  main.yml            # deploys the STAGING function to Lambda Feedback
  test-report.yml     # gathers the Pytest report of the function tests
src/module.py         # chat_module function implementation
src/module_test.py    # chat_module function tests
src/agents/           # all agents developed for the chat functionality
src/agents/utils/test_prompts.py  # allows testing of any LLM agent on example inputs containing Lambda Feedback questions and synthetic student conversations
```
You can run the Python function itself. Make sure to have a main function in either `src/module.py` or `index.py`:

```bash
python src/module.py
```
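If `src/module.py` does not already have an entry point, a minimal sketch of one is shown below; the `chat_module` signature is an assumption, and the example payload mirrors the request body used later in this document.

```python
# Appended to src/module.py — illustrative entry point, assuming chat_module(message, params).
if __name__ == "__main__":
    response = chat_module(
        "hi",
        {
            "conversation_id": "12345Test",
            "conversation_history": [{"type": "user", "content": "hi"}],
        },
    )
    print(response)
```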
You can also use the `testbench_agents.py` script to test the agents with example inputs from Lambda Feedback questions and synthetic conversations:

```bash
python src/agents/utils/testbench_agents.py
```
To build the Docker image, run the following command:

```bash
docker build -t llm_chat .
```
To run the Docker image, use one of the following commands, either passing the variables individually or loading them all from your `.env` file:

```bash
docker run -e OPENAI_API_KEY={your key} -e OPENAI_MODEL={your LLM chosen model name} -p 8080:8080 llm_chat
docker run --env-file .env -it --name my-lambda-container -p 8080:8080 llm_chat
```
This will start the chat function and expose it on port 8080, where it can be called with curl:

```bash
curl --location 'http://localhost:8080/2015-03-31/functions/function/invocations' --header 'Content-Type: application/json' --data '{"message":"hi","params":{"conversation_id":"12345Test","conversation_history": [{"type":"user","content":"hi"}]}}'
```
POST URL:

```
http://localhost:8080/2015-03-31/functions/function/invocations
```

Body:

```json
{
  "message": "hi",
  "params": {
    "conversation_id": "12345Test",
    "conversation_history": [{"type": "user", "content": "hi"}]
  }
}
```
Body with optional params:

```json
{
  "message": "hi",
  "params": {
    "conversation_id": "12345Test",
    "conversation_history": [{"type": "user", "content": "hi"}],
    "summary": " ",
    "conversational_style": " ",
    "question_response_details": "",
    "include_test_data": true,
    "agent_type": {agent_name}
  }
}
```
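If you prefer to call the local container from Python instead of curl, a small script along these lines should work; it assumes only the `requests` package and uses the endpoint and body shown above.

```python
# Call the locally running container (same endpoint and payload as the curl example).
import requests

URL = "http://localhost:8080/2015-03-31/functions/function/invocations"

payload = {
    "message": "hi",
    "params": {
        "conversation_id": "12345Test",
        "conversation_history": [{"type": "user", "content": "hi"}],
    },
}

response = requests.post(URL, json=payload, timeout=60)
print(response.status_code)
print(response.json())
```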
Deploying the chat function to Lambda Feedback is simple and straightforward, as long as the repository is within the Lambda Feedback organization.
After configuring the repository, a GitHub Actions workflow will automatically build and deploy the chat function to Lambda Feedback as soon as changes are pushed to the main branch of the repository. For development, the GitHub Actions Dev workflow also deploys a dev version of the function onto AWS.
If your chat function works fine when run locally but not when containerized, there are additional factors to consider. Here are some common issues and approaches to solving them:
Run-time dependencies
Make sure that all run-time dependencies are installed in the Docker image.
- Python packages: Make sure to add the dependency to the `requirements.txt` or `pyproject.toml` file, and run `pip install -r requirements.txt` or `poetry install` in the Dockerfile.
- System packages: If you need to install system packages, add the installation command to the Dockerfile.
- ML models: If your chat function depends on ML models, make sure to include them in the Docker image.
- Data files: If your chat function depends on data files, make sure to include them in the Docker image.

A Dockerfile sketch covering these points is shown below.
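The following is a rough illustration of those points only: the base image, paths, and handler name are assumptions and will differ from this repository's actual Dockerfile.

```dockerfile
# Illustrative sketch — base image, paths, and handler name are assumptions.
FROM public.ecr.aws/lambda/python:3.11

# Python packages: copy the dependency file first so the install layer is cached.
COPY requirements.txt .
RUN pip install -r requirements.txt

# Application code, plus any ML models or data files the function depends on.
COPY src/ ./src/

# Handler name is a placeholder; match it to your actual module and function.
CMD ["src.module.handler"]
```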