
Commit dc657d2

Rename session manager classes to message history (#323)
This PR renames `StandardSessionManager` and `SemanticSessionManager` to `MessageHistory` and `SemanticMessageHistory`, respectively, to make it clearer what these classes are for. Importing `StandardSessionManager` and `SemanticSessionManager` is now deprecated.
1 parent dca2326 commit dc657d2
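In code terms, the rename plays out as below. That the old module path still imports (with a deprecation warning) follows from the PR description; the exact warning text and mechanism are assumptions, not shown in this diff:

```python
# Preferred going forward: the new message_history module.
from redisvl.extensions.message_history import (
    MessageHistory,
    SemanticMessageHistory,
)

# Still importable per the PR description, but deprecated; expect a
# DeprecationWarning (exact message/mechanism assumed, not shown in the diff).
from redisvl.extensions.session_manager import (
    StandardSessionManager,  # old name for MessageHistory
    SemanticSessionManager,  # old name for SemanticMessageHistory
)
```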

17 files changed: +1164 -1041 lines

README.md (+8 -8)

@@ -264,20 +264,20 @@ print(response[0]["response"])
 
 > Learn more about [semantic caching](https://docs.redisvl.com/en/stable/user_guide/03_llmcache.html) for LLMs.
 
-### LLM Session Management
+### LLM Memory History
 
-Improve personalization and accuracy of LLM responses by providing user chat history as context. Manage access to the session data using recency or relevancy, *powered by vector search* with the [`SemanticSessionManager`](https://docs.redisvl.com/en/stable/api/session_manager.html).
+Improve personalization and accuracy of LLM responses by providing user message history as context. Manage access to message history data using recency or relevancy, *powered by vector search* with the [`MessageHistory`](https://docs.redisvl.com/en/stable/api/message_history.html).
 
 ```python
-from redisvl.extensions.session_manager import SemanticSessionManager
+from redisvl.extensions.message_history import SemanticMessageHistory
 
-session = SemanticSessionManager(
+history = SemanticMessageHistory(
     name="my-session",
     redis_url="redis://localhost:6379",
     distance_threshold=0.7
 )
 
-session.add_messages([
+history.add_messages([
     {"role": "user", "content": "hello, how are you?"},
     {"role": "assistant", "content": "I'm doing fine, thanks."},
     {"role": "user", "content": "what is the weather going to be today?"},
@@ -286,19 +286,19 @@ session.add_messages([
 ```
 Get recent chat history:
 ```python
-session.get_recent(top_k=1)
+history.get_recent(top_k=1)
 ```
 ```stdout
 >>> [{"role": "assistant", "content": "I don't know"}]
 ```
 Get relevant chat history (powered by vector search):
 ```python
-session.get_relevant("weather", top_k=1)
+history.get_relevant("weather", top_k=1)
 ```
 ```stdout
 >>> [{"role": "user", "content": "what is the weather going to be today?"}]
 ```
-> Learn more about [LLM session management](https://docs.redisvl.com/en/stable/user_guide/07_session_manager.html).
+> Learn more about [LLM message history](https://docs.redisvl.com/en/stable/user_guide/07_message_history.html).
 
 
 ### LLM Semantic Routing

docs/api/index.md (+1 -1)

@@ -20,7 +20,7 @@ filter
 vectorizer
 reranker
 cache
-session_manager
+message_history
 router
 threshold_optimizer
 ```

docs/user_guide/07_session_manager.ipynb renamed to docs/user_guide/07_message_history.ipynb (+31 -30)

@@ -4,7 +4,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"# LLM Session Memory"
+"# LLM Message History"
 ]
 },
 {
@@ -15,7 +15,7 @@
 "\n",
 "The solution to this problem is to append the previous conversation history to each subsequent call to the LLM.\n",
 "\n",
-"This notebook will show how to use Redis to structure and store and retrieve this conversational session memory."
+"This notebook will show how to use Redis to structure and store and retrieve this conversational message history."
 ]
 },
 {
@@ -32,8 +32,8 @@
 }
 ],
 "source": [
-"from redisvl.extensions.session_manager import StandardSessionManager\n",
-"chat_session = StandardSessionManager(name='student tutor')"
+"from redisvl.extensions.message_history import MessageHistory\n",
+"chat_history = MessageHistory(name='student tutor')"
 ]
 },
 {
@@ -52,8 +52,8 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"chat_session.add_message({\"role\":\"system\", \"content\":\"You are a helpful geography tutor, giving simple and short answers to questions about Europen countries.\"})\n",
-"chat_session.add_messages([\n",
+"chat_history.add_message({\"role\":\"system\", \"content\":\"You are a helpful geography tutor, giving simple and short answers to questions about European countries.\"})\n",
+"chat_history.add_messages([\n",
 "    {\"role\":\"user\", \"content\":\"What is the capital of France?\"},\n",
 "    {\"role\":\"llm\", \"content\":\"The capital is Paris.\"},\n",
 "    {\"role\":\"user\", \"content\":\"And what is the capital of Spain?\"},\n",
@@ -88,7 +88,7 @@
 }
 ],
 "source": [
-"context = chat_session.get_recent()\n",
+"context = chat_history.get_recent()\n",
 "for message in context:\n",
 "    print(message)"
 ]
@@ -97,7 +97,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"In many LLM flows the conversation progresses in a series of prompt and response pairs. session managers offer a convienience function `store()` to add these simply."
+"In many LLM flows the conversation progresses in a series of prompt and response pairs. Message history offers a convenience function `store()` to add these simply."
 ]
 },
 {
@@ -121,9 +121,9 @@
 "source": [
 "prompt = \"what is the size of England compared to Portugal?\"\n",
 "response = \"England is larger in land area than Portal by about 15000 square miles.\"\n",
-"chat_session.store(prompt, response)\n",
+"chat_history.store(prompt, response)\n",
 "\n",
-"context = chat_session.get_recent(top_k=6)\n",
+"context = chat_history.get_recent(top_k=6)\n",
 "for message in context:\n",
 "    print(message)"
 ]
@@ -160,33 +160,33 @@
 }
 ],
 "source": [
-"chat_session.add_message({\"role\":\"system\", \"content\":\"You are a helpful algebra tutor, giving simple answers to math problems.\"}, session_tag='student two')\n",
-"chat_session.add_messages([\n",
+"chat_history.add_message({\"role\":\"system\", \"content\":\"You are a helpful algebra tutor, giving simple answers to math problems.\"}, session_tag='student two')\n",
+"chat_history.add_messages([\n",
 "    {\"role\":\"user\", \"content\":\"What is the value of x in the equation 2x + 3 = 7?\"},\n",
 "    {\"role\":\"llm\", \"content\":\"The value of x is 2.\"},\n",
 "    {\"role\":\"user\", \"content\":\"What is the value of y in the equation 3y - 5 = 7?\"},\n",
 "    {\"role\":\"llm\", \"content\":\"The value of y is 4.\"}],\n",
 "    session_tag='student two'\n",
 "    )\n",
 "\n",
-"for math_message in chat_session.get_recent(session_tag='student two'):\n",
+"for math_message in chat_history.get_recent(session_tag='student two'):\n",
 "    print(math_message)"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## Semantic conversation memory\n",
+"## Semantic message history\n",
 "For longer conversations our list of messages keeps growing. Since LLMs are stateless we have to continue to pass this conversation history on each subsequent call to ensure the LLM has the correct context.\n",
 "\n",
 "A typical flow looks like this:\n",
 "```\n",
 "while True:\n",
 "    prompt = input('enter your next question')\n",
-"    context = chat_session.get_recent()\n",
+"    context = chat_history.get_recent()\n",
 "    response = LLM_api_call(prompt=prompt, context=context)\n",
-"    chat_session.store(prompt, response)\n",
+"    chat_history.store(prompt, response)\n",
 "```\n",
 "\n",
 "This works, but as context keeps growing so too does our LLM token count, which increases latency and cost.\n",
@@ -195,12 +195,12 @@
 "\n",
 "A better solution is to pass only the relevant conversational context on each subsequent call.\n",
 "\n",
-"For this, RedisVL has the `SemanticSessionManager`, which uses vector similarity search to return only semantically relevant sections of the conversation."
+"For this, RedisVL has the `SemanticMessageHistory`, which uses vector similarity search to return only semantically relevant sections of the conversation."
 ]
 },
 {
 "cell_type": "code",
-"execution_count": 6,
+"execution_count": 7,
 "metadata": {},
 "outputs": [
 {
@@ -212,10 +212,10 @@
 }
 ],
 "source": [
-"from redisvl.extensions.session_manager import SemanticSessionManager\n",
-"semantic_session = SemanticSessionManager(name='tutor')\n",
+"from redisvl.extensions.message_history import SemanticMessageHistory\n",
+"semantic_history = SemanticMessageHistory(name='tutor')\n",
 "\n",
-"semantic_session.add_messages(chat_session.get_recent(top_k=8))"
+"semantic_history.add_messages(chat_history.get_recent(top_k=8))"
 ]
 },
 {
@@ -234,8 +234,8 @@
 ],
 "source": [
 "prompt = \"what have I learned about the size of England?\"\n",
-"semantic_session.set_distance_threshold(0.35)\n",
-"context = semantic_session.get_relevant(prompt)\n",
+"semantic_history.set_distance_threshold(0.35)\n",
+"context = semantic_history.get_relevant(prompt)\n",
 "for message in context:\n",
 "    print(message)"
 ]
@@ -266,9 +266,9 @@
 }
 ],
 "source": [
-"semantic_session.set_distance_threshold(0.7)\n",
+"semantic_history.set_distance_threshold(0.7)\n",
 "\n",
-"larger_context = semantic_session.get_relevant(prompt)\n",
+"larger_context = semantic_history.get_relevant(prompt)\n",
 "for message in larger_context:\n",
 "    print(message)"
 ]
@@ -300,17 +300,17 @@
 }
 ],
 "source": [
-"semantic_session.store(\n",
+"semantic_history.store(\n",
 "    prompt=\"what is the smallest country in Europe?\",\n",
 "    response=\"Monaco is the smallest country in Europe at 0.78 square miles.\" # Incorrect. Vatican City is the smallest country in Europe\n",
 "    )\n",
 "\n",
 "# get the key of the incorrect message\n",
-"context = semantic_session.get_recent(top_k=1, raw=True)\n",
+"context = semantic_history.get_recent(top_k=1, raw=True)\n",
 "bad_key = context[0]['entry_id']\n",
-"semantic_session.drop(bad_key)\n",
+"semantic_history.drop(bad_key)\n",
 "\n",
-"corrected_context = semantic_session.get_recent()\n",
+"corrected_context = semantic_history.get_recent()\n",
 "for message in corrected_context:\n",
 "    print(message)"
 ]
@@ -321,7 +321,8 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"chat_session.clear()"
+"chat_history.clear()\n",
+"semantic_history.clear()"
 ]
 }
 ],
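Taken together, the loop the notebook describes looks roughly like this with the renamed class; `LLM_api_call` is the notebook's own placeholder (stubbed out here), and the 0.35 threshold is simply the value the notebook uses:

```python
from redisvl.extensions.message_history import SemanticMessageHistory

# Stub standing in for the notebook's LLM_api_call placeholder.
def LLM_api_call(prompt: str, context: list) -> str:
    return f"(LLM response to {prompt!r} given {len(context)} context messages)"

history = SemanticMessageHistory(name="demo")
history.set_distance_threshold(0.35)  # only return close semantic matches

while True:
    prompt = input("enter your next question: ")
    if not prompt:
        break
    # Pass only semantically relevant history instead of the full transcript,
    # keeping token count (and cost) down on long conversations.
    context = history.get_relevant(prompt)
    response = LLM_api_call(prompt=prompt, context=context)
    history.store(prompt, response)
```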

docs/user_guide/index.md (+1 -1)

@@ -19,7 +19,7 @@ User guides provide helpful resources for using RedisVL and its different components
 04_vectorizers
 05_hash_vs_json
 06_rerankers
-07_session_manager
+07_message_history
 08_semantic_router
 09_threshold_optimization
 release_guide/index

redisvl/extensions/constants.py (+5 -5)

@@ -1,19 +1,19 @@
 """
-Constants used within the extension classes SemanticCache, BaseSessionManager,
-StandardSessionManager,SemanticSessionManager and SemanticRouter.
+Constants used within the extension classes SemanticCache, BaseMessageHistory,
+MessageHistory, SemanticMessageHistory and SemanticRouter.
 These constants are also used within these classes' corresponding schemas.
 """
 
-# BaseSessionManager
+# BaseMessageHistory
 ID_FIELD_NAME: str = "entry_id"
 ROLE_FIELD_NAME: str = "role"
 CONTENT_FIELD_NAME: str = "content"
 TOOL_FIELD_NAME: str = "tool_call_id"
 TIMESTAMP_FIELD_NAME: str = "timestamp"
 SESSION_FIELD_NAME: str = "session_tag"
 
-# SemanticSessionManager
-SESSION_VECTOR_FIELD_NAME: str = "vector_field"
+# SemanticMessageHistory
+MESSAGE_VECTOR_FIELD_NAME: str = "vector_field"
 
 # SemanticCache
 REDIS_KEY_FIELD_NAME: str = "key"
redisvl/extensions/message_history/__init__.py (new file, +5 -0)

@@ -0,0 +1,5 @@
+from redisvl.extensions.message_history.base_history import BaseMessageHistory
+from redisvl.extensions.message_history.message_history import MessageHistory
+from redisvl.extensions.message_history.semantic_history import SemanticMessageHistory
+
+__all__ = ["BaseMessageHistory", "MessageHistory", "SemanticMessageHistory"]
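A minimal sketch exercising the new package; it assumes a local Redis at the default URL, and that `MessageHistory` accepts the same `redis_url` keyword shown for `SemanticMessageHistory` in the README diff above:

```python
from redisvl.extensions.message_history import MessageHistory

# New import path added by this commit; redis_url mirrors the README snippet
# (assumed to apply to MessageHistory as well as SemanticMessageHistory).
history = MessageHistory(name="smoke-test", redis_url="redis://localhost:6379")
history.store(prompt="What is the capital of France?", response="Paris.")
print(history.get_recent(top_k=1))
history.clear()  # clean up, as the notebook does at the end
```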
