
Commit 8d14cf3

Merge branch 'main' into preview-rc-rdi
2 parents: c8eeda6 + 8587667

103 files changed: +3551 additions, −863 deletions


config.toml

Lines changed: 1 addition & 1 deletion
@@ -55,7 +55,7 @@ rdi_redis_gears_version = "1.2.6"
 rdi_debezium_server_version = "2.3.0.Final"
 rdi_db_types = "cassandra|mysql|oracle|postgresql|sqlserver"
 rdi_cli_latest = "latest"
-rdi_current_version = "v1.6.5"
+rdi_current_version = "v1.6.6"
 
 [params.clientsConfig]
 "Python"={quickstartSlug="redis-py"}

content/commands/json.arrappend/index.md

Lines changed: 2 additions & 2 deletions
@@ -67,7 +67,7 @@ is JSONPath to specify. Default is root `$`.
 
 ## Return value
 
-`JSON.ARRAPEND` returns an [array]({{< relref "develop/reference/protocol-spec#resp-arrays" >}}) of integer replies for each path, the array's new size, or `nil`, if the matching JSON value is not an array.
+`JSON.ARRAPPEND` returns an [array]({{< relref "develop/reference/protocol-spec#resp-arrays" >}}) of integer replies for each path, the array's new size, or `nil`, if the matching JSON value is not an array.
 For more information about replies, see [Redis serialization protocol specification]({{< relref "/develop/reference/protocol-spec" >}}).
 
 ## Examples
@@ -82,7 +82,7 @@ redis> JSON.SET item:1 $ '{"name":"Noise-cancelling Bluetooth headphones","descr
 OK
 {{< / highlight >}}
 
-Add color `blue` to the end of the `colors` array. `JSON.ARRAPEND` returns the array's new size.
+Add color `blue` to the end of the `colors` array. `JSON.ARRAPPEND` returns the array's new size.
 
 {{< highlight bash >}}
 redis> JSON.ARRAPPEND item:1 $.colors '"blue"'
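The reply rules fixed by this hunk (the new array size for each matched path, or nil when the matched value is not an array) can be sketched outside Redis. The helper below is a hypothetical pure-Python model of those semantics for a single matched value, not the command's implementation:

```python
import json


def arrappend_reply(value, *items):
    """Model of one JSON.ARRAPPEND reply element: append the items and
    return the array's new size, or None (RESP nil) if the matched
    JSON value is not an array."""
    if not isinstance(value, list):
        return None
    value.extend(items)
    return len(value)


doc = json.loads('{"colors": ["black", "silver"]}')
print(arrappend_reply(doc["colors"], "blue"))  # new size: 3
print(arrappend_reply(doc, "blue"))            # object, not array: None
```

The real command evaluates a JSONPath and returns one such reply per matched location, which is why the documented return type is an array of integer replies.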

content/commands/json.debug-memory/index.md

Lines changed: 1 addition & 2 deletions
@@ -72,7 +72,7 @@ Get the values' memory usage in bytes.
 
 {{< highlight bash >}}
 redis> JSON.DEBUG MEMORY item:2
-(integer) 253
+(integer) 573
 {{< / highlight >}}
 </details>
 
@@ -84,4 +84,3 @@ redis> JSON.DEBUG MEMORY item:2
 
 * [RedisJSON]({{< relref "/develop/data-types/json/" >}})
 * [Index and search JSON documents]({{< relref "/develop/interact/search-and-query/indexing/" >}})
-
content/develop/ai/index.md

Lines changed: 30 additions & 4 deletions
@@ -34,6 +34,13 @@ This page organized into a few sections depending on what you’re trying to do:
 1. [**Search with vectors**]({{< relref "develop/interact/search-and-query/advanced-concepts/vectors#search-with-vectors" >}}): Redis supports several advanced querying strategies with vector fields including k-nearest neighbor ([KNN]({{< relref "develop/interact/search-and-query/advanced-concepts/vectors#knn-vector-search" >}})), [vector range queries]({{< relref "develop/interact/search-and-query/advanced-concepts/vectors#vector-range-queries" >}}), and [metadata filters]({{< relref "develop/interact/search-and-query/advanced-concepts/vectors#filters" >}}).
 1. [**Configure vector queries at runtime**]({{< relref "develop/interact/search-and-query/advanced-concepts/vectors#runtime-query-parameters" >}}). Select the best filter mode to optimize query execution.
 
+#### Learn how to index and query vector embeddings
+* [redis-py (Python)]({{< relref "/develop/clients/redis-py/vecsearch" >}})
+* [NRedisStack (C#/.NET)]({{< relref "/develop/clients/dotnet/vecsearch" >}})
+* [node-redis (JavaScript)]({{< relref "/develop/clients/nodejs/vecsearch" >}})
+* [Jedis (Java)]({{< relref "/develop/clients/jedis/vecsearch" >}})
+* [go-redis (Go)]({{< relref "/develop/clients/go/vecsearch" >}})
+
 ## Concepts
 
 Learn to perform vector search and use gateways and semantic caching in your AI/ML projects.
@@ -44,7 +51,7 @@ Learn to perform vector search and use gateways and semantic caching in your AI/
 
 ## Quickstarts
 
-Quickstarts or recipes are useful when you are trying to build specific functionality. For example, you might want to do RAG with LangChain or set up LLM memory for you AI agent. Get started with the following Redis Python notebooks.
+Quickstarts or recipes are useful when you are trying to build specific functionality. For example, you might want to do RAG with LangChain or set up LLM memory for your AI agent. Get started with the following Redis Python notebooks.
 
 * [The place to start if you are brand new to Redis](https://colab.research.google.com/github/redis-developer/redis-ai-resources/blob/main/python-recipes/redis-intro/00_redis_intro.ipynb)
 
@@ -53,6 +60,7 @@ Vector search retrieves results based on the similarity of high-dimensional nume
 * [Implementing hybrid search with Redis](https://colab.research.google.com/github/redis-developer/redis-ai-resources/blob/main/python-recipes/vector-search/02_hybrid_search.ipynb)
 * [Vector search with Redis Python client](https://colab.research.google.com/github/redis-developer/redis-ai-resources/blob/main/python-recipes/vector-search/00_redispy.ipynb)
 * [Vector search with Redis Vector Library](https://colab.research.google.com/github/redis-developer/redis-ai-resources/blob/main/python-recipes/vector-search/01_redisvl.ipynb)
+* [Shows how to convert a float 32 index to float16 or integer data types](https://github.com/redis-developer/redis-ai-resources/blob/main/python-recipes/vector-search/03_dtype_support.ipynb)
 
 #### RAG
 Retrieval Augmented Generation (aka RAG) is a technique to enhance the ability of an LLM to respond to user queries. The retrieval part of RAG is supported by a vector database, which can return semantically relevant results to a user’s query, serving as contextual information to augment the generative capabilities of an LLM.
@@ -65,12 +73,15 @@ Retrieval Augmented Generation (aka RAG) is a technique to enhance the ability o
 * [Vector search with Azure](https://techcommunity.microsoft.com/blog/azuredevcommunityblog/vector-similarity-search-with-azure-cache-for-redis-enterprise/3822059)
 * [RAG with Spring AI](https://redis.io/blog/building-a-rag-application-with-redis-and-spring-ai/)
 * [RAG with Vertex AI](https://github.com/redis-developer/gcp-redis-llm-stack/tree/main)
-* [Notebook for additional tips and techniques to improve RAG quality](https://colab.research.google.com/github/redis-developer/redis-ai-resources/blob/main/python-recipes/RAG/04_advanced_redisvl.ipynb)
+* [Notebook for additional tips and techniques to improve RAG quality](https://colab.research.google.com/github/redis-developer/redis-ai-resources/blob/main/python-recipes/RAG/04_advanced_redisvl.ipynb)
+* [Implement a simple RBAC policy with vector search using Redis](https://github.com/redis-developer/redis-ai-resources/blob/main/python-recipes/RAG/07_user_role_based_rag.ipynb)
 
 #### Agents
 AI agents can act autonomously to plan and execute tasks for the user.
+* [Redis Notebooks for LangGraph](https://github.com/redis-developer/langgraph-redis/tree/main/examples)
 * [Notebook to get started with LangGraph and agents](https://colab.research.google.com/github/redis-developer/redis-ai-resources/blob/main/python-recipes/agents/00_langgraph_redis_agentic_rag.ipynb)
 * [Build a collaborative movie recommendation system using Redis for data storage, CrewAI for agent-based task execution, and LangGraph for workflow management.](https://colab.research.google.com/github/redis-developer/redis-ai-resources/blob/main/python-recipes/agents/01_crewai_langgraph_redis.ipynb)
+* [Full-Featured Agent Architecture](https://github.com/redis-developer/redis-ai-resources/blob/main/python-recipes/agents/02_full_featured_agent.ipynb)
 
 #### LLM memory
 LLMs are stateless. To maintain context within a conversation chat sessions must be stored and resent to the LLM. Redis manages the storage and retrieval of chat sessions to maintain context and conversational relevance.
@@ -81,14 +92,24 @@ LLMs are stateless. To maintain context within a conversation chat sessions must
 An estimated 31% of LLM queries are potentially redundant. Redis enables semantic caching to help cut down on LLM costs quickly.
 * [Build a semantic cache using the Doc2Cache framework and Llama3.1](https://colab.research.google.com/github/redis-developer/redis-ai-resources/blob/main/python-recipes/semantic-cache/doc2cache_llama3_1.ipynb)
 * [Build a semantic cache with Redis and Google Gemini](https://colab.research.google.com/github/redis-developer/redis-ai-resources/blob/main/python-recipes/semantic-cache/semantic_caching_gemini.ipynb)
+* [Optimize semantic cache threshold with RedisVL](https://github.com/redis-developer/redis-ai-resources/blob/main/python-recipes/semantic-cache/02_semantic_cache_optimization.ipynb)
+
+#### Semantic routing
+Routing is a simple and effective way of preventing misuses with your AI application or for creating branching logic between data sources etc.
+* [Simple examples of how to build an allow/block list router in addition to a multi-topic router](https://github.com/redis-developer/redis-ai-resources/blob/main/python-recipes/semantic-router/00_semantic_routing.ipynb)
+* [Use RouterThresholdOptimizer from redisvl to setup best router config](https://github.com/redis-developer/redis-ai-resources/blob/main/python-recipes/semantic-router/01_routing_optimization.ipynb)
 
 #### Computer vision
 Build a facial recognition system using the Facenet embedding model and RedisVL.
 * [Facial recognition](https://github.com/redis-developer/redis-ai-resources/blob/main/python-recipes/computer-vision/00_facial_recognition_facenet.ipynb)
 
 #### Recommendation systems
 * [Intro content filtering example with redisvl](https://github.com/redis-developer/redis-ai-resources/blob/main/python-recipes/recommendation-systems/00_content_filtering.ipynb)
-* [Intro collaborative filtering example with redisvl](https://github.com/redis-developer/redis-ai-resources/blob/main/python-recipes/recommendation-systems/01_collaborative_filtering.ipynb)
+* [Intro collaborative filtering example with redisvl](https://github.com/redis-developer/redis-ai-resources/blob/main/python-recipes/recommendation-systems/01_collaborative_filtering.ipynb)
+* [Intro deep learning two tower example with redisvl](https://github.com/redis-developer/redis-ai-resources/blob/main/python-recipes/recommendation-systems/02_two_towers.ipynb)
+
+#### Feature store
+* [Credit scoring system using Feast with Redis as the online store](https://github.com/redis-developer/redis-ai-resources/blob/main/python-recipes/feature-store/00_feast_credit_score.ipynb)
 
 ## Tutorials
 Need a deeper-dive through different use cases and topics?
@@ -97,13 +118,16 @@ Need a deeper-dive through different use cases and topics?
 * [Agentic RAG](https://github.com/redis-developer/agentic-rag) - A tutorial focused on agentic RAG with LlamaIndex and Amazon Bedrock
 * [RAG on Vertex AI](https://github.com/redis-developer/gcp-redis-llm-stack/tree/main) - A RAG tutorial featuring Redis with Vertex AI
 * [RAG workbench](https://github.com/redis-developer/redis-rag-workbench) - A development playground for exploring RAG techniques with Redis
+* [ArXiv Chat](https://github.com/redis-developer/ArxivChatGuru) - Streamlit demo of RAG over ArXiv documents with Redis & OpenAI
 
-#### Recommendation system
+#### Recommendations and search
 * [Recommendation systems w/ NVIDIA Merlin & Redis](https://github.com/redis-developer/redis-nvidia-recsys) - Three examples, each escalating in complexity, showcasing the process of building a realtime recsys with NVIDIA and Redis
 * [Redis product search](https://github.com/redis-developer/redis-product-search) - Build a real-time product search engine using features like full-text search, vector similarity, and real-time data updates
+* [ArXiv Search](https://github.com/redis-developer/redis-arxiv-search) - Full stack implementation of Redis with React FE
 
 ## Ecosystem integrations
 
+* [LangGraph & Redis: Build smarter AI agents with memory & persistence](https://redis.io/blog/langgraph-redis-build-smarter-ai-agents-with-memory-persistence/)
 * [Amazon Bedrock setup guide]({{< relref "/integrate/amazon-bedrock/set-up-redis" >}})
 * [LangChain Redis Package: Smarter AI apps with advanced vector storage and faster caching](https://redis.io/blog/langchain-redis-partner-package/)
 * [LlamaIndex integration for Redis as a vector store](https://gpt-index.readthedocs.io/en/latest/examples/vector_stores/RedisIndexDemo.html)
@@ -113,6 +137,8 @@ Need a deeper-dive through different use cases and topics?
 * [Deploy GenAI apps faster with Redis and NVIDIA NIM](https://redis.io/blog/use-redis-with-nvidia-nim-to-deploy-genai-apps-faster/)
 * [Building LLM Applications with Kernel Memory and Redis](https://redis.io/blog/building-llm-applications-with-kernel-memory-and-redis/)
 * [DocArray integration of Redis as a vector database by Jina AI](https://docs.docarray.org/user_guide/storing/index_redis/)
+* [Semantic Kernel: A popular library by Microsoft to integrate LLMs with plugins](https://learn.microsoft.com/en-us/semantic-kernel/concepts/vector-store-connectors/out-of-the-box-connectors/redis-connector?pivots=programming-language-csharp)
+* [LiteLLM integration](https://docs.litellm.ai/docs/caching/all_caches#initialize-cache---in-memory-redis-s3-bucket-redis-semantic-disk-cache-qdrant-semantic)
 
 ## Benchmarks
 See how we stack up against the competition.
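The semantic-caching idea this file describes (reusing a cached LLM response when a new query's embedding is close enough to a cached one) can be sketched without Redis. The class below is a hypothetical in-memory model using cosine similarity over toy vectors; a real deployment would use an embedding model and a Redis vector index via RedisVL:

```python
from math import sqrt


def cosine(a, b):
    """Cosine similarity of two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb)


class SemanticCache:
    """Toy semantic cache: a hit is any stored entry whose embedding's
    cosine similarity to the query embedding meets the threshold."""

    def __init__(self, threshold=0.9):
        self.threshold = threshold
        self.entries = []  # list of (embedding, response) pairs

    def put(self, embedding, response):
        self.entries.append((embedding, response))

    def get(self, embedding):
        best = max(self.entries,
                   key=lambda e: cosine(e[0], embedding),
                   default=None)
        if best is not None and cosine(best[0], embedding) >= self.threshold:
            return best[1]
        return None  # cache miss: call the LLM and put() the result


cache = SemanticCache(threshold=0.95)
cache.put([1.0, 0.0, 0.1], "cached answer")
print(cache.get([0.99, 0.0, 0.12]))  # near-duplicate query: hit
print(cache.get([0.0, 1.0, 0.0]))    # unrelated query: miss (None)
```

Tuning the threshold is exactly the trade-off the "Optimize semantic cache threshold" notebook above explores: too low returns stale or wrong answers, too high forfeits the cost savings.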

content/develop/clients/dotnet/vecsearch.md

Lines changed: 156 additions & 3 deletions
@@ -32,7 +32,9 @@ In the example below, we use [Microsoft.ML](https://dotnet.microsoft.com/en-us/a
 to generate the vector embeddings to store and index with Redis Query Engine.
 We also show how to adapt the code to use
 [Azure OpenAI](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/embeddings?tabs=csharp)
-for the embeddings.
+for the embeddings. The code is first demonstrated for hash documents with a
+separate section to explain the
+[differences with JSON documents](#differences-with-json-documents).
 
 ## Initialize
 
@@ -89,7 +91,6 @@ using Azure;
 using Azure.AI.OpenAI;
 ```
 
-
 ## Define a function to obtain the embedding model
 
 {{< note >}}Ignore this step if you are using an Azure OpenAI
@@ -154,7 +155,9 @@ array as a `byte` string. To simplify this, we declare a
 then encodes the returned `float` array as a `byte` string. If you are
 storing your documents as JSON objects instead of hashes, then you should
 use the `float` array for the embedding directly, without first converting
-it to a `byte` string.
+it to a `byte` string (see [Differences with JSON documents](#differences-with-json-documents)
+below).
+
 
 ```csharp
 static byte[] GetEmbedding(
@@ -414,6 +417,156 @@ As you would expect, the result for `doc:1` with the content text
 is the result that is most similar in meaning to the query text
 *"That is a happy person"*.
 
+## Differences with JSON documents
+
+Indexing JSON documents is similar to hash indexing, but there are some
+important differences. JSON allows much richer data modeling with nested fields, so
+you must supply a [path]({{< relref "/develop/data-types/json/path" >}}) in the schema
+to identify each field you want to index. However, you can declare a short alias for each
+of these paths to avoid typing it in full for
+every query. Also, you must specify `IndexType.JSON` with the `On()` option when you
+create the index.
+
+The code below shows these differences, but the index is otherwise very similar to
+the one created previously for hashes:
+
+```cs
+var jsonSchema = new Schema()
+    .AddTextField(new FieldName("$.content", "content"))
+    .AddTagField(new FieldName("$.genre", "genre"))
+    .AddVectorField(
+        new FieldName("$.embedding", "embedding"),
+        VectorField.VectorAlgo.HNSW,
+        new Dictionary<string, object>()
+        {
+            ["TYPE"] = "FLOAT32",
+            ["DIM"] = "150",
+            ["DISTANCE_METRIC"] = "L2"
+        }
+    );
+
+db.FT().Create(
+    "vector_json_idx",
+    new FTCreateParams()
+        .On(IndexDataType.JSON)
+        .Prefix("jdoc:"),
+    jsonSchema
+);
+```
+
+An important difference with JSON indexing is that the vectors are
+specified using arrays of `float` instead of binary strings. This requires a modification
+to the `GetEmbedding()` function declared in
+[Define a function to generate an embedding](#define-a-function-to-generate-an-embedding)
+above:
+
+```cs
+static float[] GetFloatEmbedding(
+    PredictionEngine<TextData, TransformedTextData> model, string sentence
+)
+{
+    // Call the prediction API to convert the text into embedding vector.
+    var data = new TextData()
+    {
+        Text = sentence
+    };
+
+    var prediction = model.Predict(data);
+
+    float[] floatArray = Array.ConvertAll(prediction.Features, x => (float)x);
+    return floatArray;
+}
+```
+
+You should make a similar modification to the `GetEmbeddingFromAzure()` function
+if you are using Azure OpenAI with JSON.
+
+Use [`JSON().set()`]({{< relref "/commands/json.set" >}}) to add the data
+instead of [`HashSet()`]({{< relref "/commands/hset" >}}):
+
+```cs
+var jSentence1 = "That is a very happy person";
+
+var jdoc1 = new {
+    content = jSentence1,
+    genre = "persons",
+    embedding = GetFloatEmbedding(predEngine, jSentence1),
+};
+
+db.JSON().Set("jdoc:1", "$", jdoc1);
+
+var jSentence2 = "That is a happy dog";
+
+var jdoc2 = new {
+    content = jSentence2,
+    genre = "pets",
+    embedding = GetFloatEmbedding(predEngine, jSentence2),
+};
+
+db.JSON().Set("jdoc:2", "$", jdoc2);
+
+var jSentence3 = "Today is a sunny day";
+
+var jdoc3 = new {
+    content = jSentence3,
+    genre = "weather",
+    embedding = GetFloatEmbedding(predEngine, jSentence3),
+};
+
+db.JSON().Set("jdoc:3", "$", jdoc3);
+```
+
+The query is almost identical to the one for the hash documents. This
+demonstrates how the right choice of aliases for the JSON paths can
+save you having to write complex queries. The only significant difference is
+that the `FieldName` objects created for the `ReturnFields()` option must
+include the JSON path for the field.
+
+An important thing to notice
+is that the vector parameter for the query is still specified as a
+binary string (using the `GetEmbedding()` method), even though the data for
+the `embedding` field of the JSON was specified as a `float` array.
+
+```cs
+var jRes = db.FT().Search("vector_json_idx",
+    new Query("*=>[KNN 3 @embedding $query_vec AS score]")
+        .AddParam("query_vec", GetEmbedding(predEngine, "That is a happy person"))
+        .ReturnFields(
+            new FieldName("$.content", "content"),
+            new FieldName("$.score", "score")
+        )
+        .SetSortBy("score")
+        .Dialect(2));
+
+foreach (var doc in jRes.Documents) {
+    var props = doc.GetProperties();
+    var propText = string.Join(
+        ", ",
+        props.Select(p => $"{p.Key}: '{p.Value}'")
+    );
+
+    Console.WriteLine(
+        $"ID: {doc.Id}, Properties: [\n {propText}\n]"
+    );
+}
+```
+
+Apart from the `jdoc:` prefixes for the keys, the result from the JSON
+query is the same as for hash:
+
+```
+ID: jdoc:1, Properties: [
+ score: '4.30777168274', content: 'That is a very happy person'
+]
+ID: jdoc:2, Properties: [
+ score: '25.9752807617', content: 'That is a happy dog'
+]
+ID: jdoc:3, Properties: [
+ score: '68.8638000488', content: 'Today is a sunny day'
+]
+```
+
 ## Learn more
 
 See
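The hash-versus-JSON distinction this file documents comes down to how the embedding is encoded: a hash field stores the FLOAT32 vector packed as a byte string, while a JSON document stores the plain float array, and the KNN query parameter is packed like the hash form. The sketch below models that packing in Python with the standard `struct` module; the little-endian IEEE 754 float32 layout is an assumption stated here for illustration:

```python
import struct


def to_float32_bytes(vec):
    """Pack a float vector as a little-endian FLOAT32 byte string,
    the form a hash field (or a query vector parameter) carries."""
    return struct.pack(f"<{len(vec)}f", *vec)


def from_float32_bytes(buf):
    """Unpack a FLOAT32 byte string back into a tuple of floats."""
    return struct.unpack(f"<{len(buf) // 4}f", buf)


embedding = [0.25, -1.5, 3.0]         # what a JSON document stores directly
packed = to_float32_bytes(embedding)  # what a hash field stores

print(len(packed))                    # 3 floats x 4 bytes = 12
print(from_float32_bytes(packed))     # (0.25, -1.5, 3.0)
```

This is why the C# code above needs both `GetEmbedding()` (bytes, for hashes and query parameters) and `GetFloatEmbedding()` (floats, for JSON documents): the same vector, two encodings.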
