In this final exercise, you will learn how to interact with your AI Agent via an API call. This enables you to programmatically send queries to your AI Agent and retrieve responses. A pre-configured Python script (`agent.py`) is provided in the GitHub repository to simplify this process.
Before proceeding, ensure you have:

- Python 3.7 or later installed.
- (Optional, but highly recommended) A virtual environment such as conda or venv.
- The `agent.py` script, available in the repository under `kubernetes-walkthrough/agent.py`.
- The required Python libraries installed: `pyjwt`, `openai`, and `httpx`.

  ```
  pip install pyjwt openai httpx
  ```

- Your AI Agent's credentials:
  - Agent Key: Found in your chatbot embed code under `data-chatbot-id`.
  - Agent Endpoint URL: Found in your chatbot embed code (the `src` value in the embed script).
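If you want to confirm these prerequisites before editing the script, a quick sanity check like the one below works. It is only an illustration and not part of `agent.py`; note that the `pyjwt` package installs a module named `jwt`.

```
# Minimal sanity check for the prerequisites (illustrative; not part of agent.py).
import sys

# The walkthrough targets Python 3.7 or later.
assert sys.version_info >= (3, 7), "Python 3.7 or later is required"

# pyjwt installs the `jwt` module; openai and httpx install modules of the same name.
import jwt
import openai
import httpx

print(f"Python {sys.version.split()[0]} with pyjwt, openai, and httpx available")
```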
- Open the `agent.py` file in a text editor. This Python file is located in the `kubernetes-walkthrough` directory in this repo.
- Replace the placeholders with your agent's details:
  - `agent_key`: Replace `<your-agent-endpoint-key>` with your Agent Key.
  - `agent_endpoint`: Replace `<your-agent-endpoint-url>` with your Agent Endpoint URL. Ensure it ends with `/api/v1/`.
- Run the script:

  ```
  python agent.py
  ```

- The script will send a query to your Agent. The default query is:

  ```
  How do I troubleshoot a CrashLoopBackOff error in Kubernetes?
  ```

- The agent's response will be printed to the terminal (a rough sketch of what the script does follows this list).
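The authoritative version of the script lives in the repository, but conceptually it boils down to something like the following sketch: point an OpenAI-compatible client at your Agent Endpoint URL, authenticate with your Agent Key, send the default query, and print the reply. Treat the variable names beyond `agent_key` and `agent_endpoint`, and the `model` value, as illustrative assumptions rather than the literal file contents.

```
# Rough sketch of what agent.py does (illustrative, not the actual file).
from openai import OpenAI

# Placeholders you replace with your own values, as described above.
agent_key = "<your-agent-endpoint-key>"
agent_endpoint = "<your-agent-endpoint-url>"  # must end with /api/v1/

# The agent endpoint is OpenAI API compatible, so the standard client works
# once it is pointed at the agent's base URL and given the Agent Key.
client = OpenAI(base_url=agent_endpoint, api_key=agent_key)

response = client.chat.completions.create(
    model="n/a",  # assumption: the agent endpoint ignores the model name
    messages=[
        {
            "role": "user",
            "content": "How do I troubleshoot a CrashLoopBackOff error in Kubernetes?",
        }
    ],
)

print(response.choices[0].message.content)
```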
The deployed API endpoint is OpenAI API compatible. Most LLM providers expose OpenAI-compatible APIs these days, and these Agents are no different, which means you can also reach the endpoint with any OpenAI-compatible client or a plain HTTP request, as sketched below.
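Because of that compatibility, you are not tied to `agent.py` or even to the `openai` library. The sketch below calls the endpoint directly with `httpx` (one of the libraries installed earlier); the `chat/completions` path and the `Bearer` authorization header are assumptions based on the OpenAI API convention, not something taken from the repository.

```
# Calling the agent endpoint directly over HTTP (sketch; assumes an
# OpenAI-style chat/completions path and Bearer auth with the Agent Key).
import httpx

agent_key = "<your-agent-endpoint-key>"
agent_endpoint = "<your-agent-endpoint-url>"  # must end with /api/v1/

payload = {
    # Payload shape follows the OpenAI chat completions API; whether a
    # "model" field is required by the agent is not confirmed here.
    "messages": [
        {
            "role": "user",
            "content": "How do I troubleshoot a CrashLoopBackOff error in Kubernetes?",
        }
    ],
}

resp = httpx.post(
    f"{agent_endpoint}chat/completions",
    headers={"Authorization": f"Bearer {agent_key}"},
    json=payload,
    timeout=60.0,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```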
- Modify the `messages` parameter in the script to include your desired query:

  ```
  messages=[{"role": "user", "content": "Your custom query here"}]
  ```

- Adjust the response handling as needed. A combined example of both customizations follows this list.
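As a combined, purely illustrative example of both customizations, the snippet below sends a different query together with a system message and handles the response a little more defensively than a bare print. It assumes the same client setup as the earlier sketch; the system prompt, query text, and output file name are made up for the example.

```
# Hypothetical customization of the query and response handling,
# assuming the OpenAI-client setup from the earlier sketch.
from openai import OpenAI

agent_key = "<your-agent-endpoint-key>"
agent_endpoint = "<your-agent-endpoint-url>"  # must end with /api/v1/

client = OpenAI(base_url=agent_endpoint, api_key=agent_key)

response = client.chat.completions.create(
    model="n/a",  # assumption: the agent endpoint ignores the model name
    messages=[
        {"role": "system", "content": "Answer as a concise Kubernetes SRE."},
        {"role": "user", "content": "Why is my Deployment stuck with ImagePullBackOff?"},
    ],
)

# Basic response handling: guard against an empty choice list and
# save the answer to a file in addition to printing it.
if response.choices:
    answer = response.choices[0].message.content
    print(answer)
    with open("agent_answer.txt", "w") as f:
        f.write(answer or "")
else:
    print("The agent returned no choices.")
```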
Let's wrap up here...