
Commit 06933d8

Gemini informational agent + improved prompt (#30)
Authored Feb 25, 2025

* feat: improve informational agent prompt
* feat: gemini model informational agent

1 parent 3424854 · commit 06933d8

File tree

6 files changed: +13 -11 lines changed

.github/workflows/dev.yml (+2)

@@ -15,6 +15,8 @@ jobs:
     env:
       OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
       OPENAI_MODEL: ${{ vars.OPENAI_MODEL }}
+      GOOGLE_AI_API_KEY: ${{ secrets.GOOGLE_AI_API_KEY }}
+      GOOGLE_AI_MODEL: ${{ vars.GOOGLE_AI_MODEL }}
     steps:
       - name: Checkout Code
         uses: actions/checkout@v4

.github/workflows/main.yml (+2)

@@ -15,6 +15,8 @@ jobs:
     env:
       OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
       OPENAI_MODEL: ${{ vars.OPENAI_MODEL }}
+      GOOGLE_AI_API_KEY: ${{ secrets.GOOGLE_AI_API_KEY }}
+      GOOGLE_AI_MODEL: ${{ vars.GOOGLE_AI_MODEL }}
     steps:
       - name: Checkout Code
         uses: actions/checkout@v4
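Both workflow files export the same two Google AI settings into the job environment. As a minimal, hypothetical sketch (the helper name and the fallback model string below are assumptions, not taken from this repo), agent code could read them like this:

```python
import os

def google_ai_config() -> dict:
    """Hypothetical helper: read the Google AI settings that the
    workflow env blocks expose to the job."""
    api_key = os.environ.get("GOOGLE_AI_API_KEY")
    if not api_key:
        raise RuntimeError("GOOGLE_AI_API_KEY is not set")
    # The default model name here is an assumption, not from the repo.
    model = os.environ.get("GOOGLE_AI_MODEL", "gemini-1.5-flash")
    return {"api_key": api_key, "model": model}
```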

src/agents/informational_agent/informational_agent.py (+3 -4)

@@ -1,10 +1,10 @@
 try:
-    from ..llm_factory import OpenAILLMs
+    from ..llm_factory import OpenAILLMs, GoogleAILLMs
     from .informational_prompts import \
         informational_role_prompt, conv_pref_prompt, update_conv_pref_prompt, summary_prompt, update_summary_prompt, summary_system_prompt
     from ..utils.types import InvokeAgentResponseType
 except ImportError:
-    from src.agents.llm_factory import OpenAILLMs
+    from src.agents.llm_factory import OpenAILLMs, GoogleAILLMs
     from src.agents.informational_agent.informational_prompts import \
         informational_role_prompt, conv_pref_prompt, update_conv_pref_prompt, summary_prompt, update_summary_prompt, summary_system_prompt
     from src.agents.utils.types import InvokeAgentResponseType

@@ -37,7 +37,7 @@ class State(TypedDict):
 
 class InformationalAgent:
     def __init__(self):
-        llm = OpenAILLMs(temperature=0.25)
+        llm = GoogleAILLMs()
         self.llm = llm.get_llm()
         summarisation_llm = OpenAILLMs()
         self.summarisation_llm = summarisation_llm.get_llm()

@@ -135,7 +135,6 @@ def summarize_conversation(self, state: State, config: RunnableConfig) -> dict:
         delete_messages: list[AllMessageTypes] = [RemoveMessage(id=m.id) for m in state["messages"][:-3]]
 
         return {"summary": summary_response.content, "conversationalStyle": conversationalStyle_response.content, "messages": delete_messages}
-        # return {"summary": summary_response.content, "messages": delete_messages}
 
     def should_summarize(self, state: State) -> str:
         """
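The commit swaps the chat model by constructing `GoogleAILLMs()` while keeping `OpenAILLMs` for summarisation. The real factory lives in `src/agents/llm_factory.py`, which this commit does not show; the following is a stdlib-only sketch of the interface the agent relies on (construct, then `get_llm()`), with the wrapped client stubbed out as a plain dict rather than a real Gemini client:

```python
import os

class GoogleAILLMs:
    """Sketch of the factory interface used by InformationalAgent:
    construct once, then hand back the underlying chat model via
    get_llm(). The real implementation would build a Gemini client
    here; this stub only records the configuration."""

    def __init__(self, temperature: float = 0.0):
        # Default model name is an assumption, not taken from the repo.
        self.model_name = os.environ.get("GOOGLE_AI_MODEL", "gemini-1.5-flash")
        self.temperature = temperature
        # Stub standing in for the actual LLM client object.
        self._llm = {"model": self.model_name, "temperature": self.temperature}

    def get_llm(self):
        return self._llm
```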

src/agents/informational_agent/informational_prompts.py (+3 -1)

@@ -31,8 +31,10 @@
     Follow up with guided hints or clarifications based on their response.
 
 ## Flexibility:
-Adjust your approach dynamically, whether the student seeks detailed guidance, prefers a hands-off approach, or demonstrates unique problem-solving strategies. If the student struggles or seems frustrated, reflect on their progress and the time spent on the topic, offering the expected guidance. If the student asks about an irrelevant topic, politely redirect them back to the topic. Do not end your responses with a concluding statement."""
+Restrict your response's length to quickly resolve the student's query. However, adjust your approach dynamically, if the student seeks detailed guidance, prefers a hands-off approach, or demonstrates unique problem-solving strategies. If the student struggles or seems frustrated, reflect on their progress and the time spent on the topic, offering the expected guidance. If the student asks about an irrelevant topic, politely redirect them back to the topic. Do not end your responses with a concluding statement.
 
+## Governance
+You are a chatbot deployed in Lambda Feedback, an online self-study platform. You are discussing with students from Imperial College London."""
 
 pref_guidelines = """**Guidelines:**
 - Use concise, objective language.

src/agents/utils/example_inputs/example_input_4.json (+1 -4)

@@ -2,10 +2,7 @@
   "message": "hi",
   "params": {
     "include_test_data": true,
-    "conversation_history": [
-      { "type": "user", "content": "hi" },
-      { "type": "ai", "content": "Hello! How can I help you today?" }
-    ],
+    "conversation_history": [{ "type": "user", "content": "hi" }],
     "summary": "",
     "conversational_style": "",
     "question_response_details": {

src/agents/utils/testbench_agents.py (+2 -2)

@@ -30,7 +30,7 @@
 STEP 2: Extract the parameters from the JSON
 """
 # NOTE: #### This is the testing message!! #####
-message = "Hi"
+message = "What do you know about me?"
 # NOTE: ########################################
 
 # replace "mock" in the message and conversation history with the actual message

@@ -57,7 +57,7 @@
         question_information,
         question_access_information
     )
-    print("Question Response Details Prompt:", question_response_details_prompt, "\n\n")
+    # print("Question Response Details Prompt:", question_response_details_prompt, "\n\n")
 
     if "agent_type" in params:
         agent_type = params["agent_type"]

0 commit comments