`_ingest-pipelines/processors/sparse-encoding.md` (+1 −1)

````diff
@@ -162,4 +162,4 @@ Once you have created an ingest pipeline, you need to create an index for ingest
 - To learn how to use the `neural_sparse` query for a sparse search, see [Neural sparse query]({{site.url}}{{site.baseurl}}/query-dsl/specialized/neural-sparse/).
 - To learn more about sparse search, see [Neural sparse search]({{site.url}}{{site.baseurl}}/search-plugins/neural-sparse-search/).
 - To learn more about using models in OpenSearch, see [Choosing a model]({{site.url}}{{site.baseurl}}/ml-commons-plugin/integrating-ml-models/#choosing-a-model).
-- For a comprehensive example, see [Neural search tutorial]({{site.url}}{{site.baseurl}}/search-plugins/neural-search-tutorial/).
+- For a comprehensive example, see [Getting started with semantic and hybrid search]({{site.url}}{{site.baseurl}}/search-plugins/neural-search-tutorial/).
````
`_ingest-pipelines/processors/text-embedding.md` (+1 −1)

````diff
@@ -132,4 +132,4 @@ Once you have created an ingest pipeline, you need to create an index for ingest
 - To learn how to use the `neural` query for text search, see [Neural query]({{site.url}}{{site.baseurl}}/query-dsl/specialized/neural/).
 - To learn more about semantic search, see [Semantic search]({{site.url}}{{site.baseurl}}/search-plugins/semantic-search/).
 - To learn more about using models in OpenSearch, see [Choosing a model]({{site.url}}{{site.baseurl}}/ml-commons-plugin/integrating-ml-models/#choosing-a-model).
-- For a comprehensive example, see [Neural search tutorial]({{site.url}}{{site.baseurl}}/search-plugins/neural-search-tutorial/).
+- For a comprehensive example, see [Getting started with semantic and hybrid search]({{site.url}}{{site.baseurl}}/search-plugins/neural-search-tutorial/).
````
`_ingest-pipelines/processors/text-image-embedding.md` (+1 −1)

````diff
@@ -138,4 +138,4 @@ Once you have created an ingest pipeline, you need to create an index for ingest
 - To learn how to use the `neural` query for a multimodal search, see [Neural query]({{site.url}}{{site.baseurl}}/query-dsl/specialized/neural/).
 - To learn more about multimodal search, see [Multimodal search]({{site.url}}{{site.baseurl}}/search-plugins/multimodal-search/).
 - To learn more about using models in OpenSearch, see [Choosing a model]({{site.url}}{{site.baseurl}}/ml-commons-plugin/integrating-ml-models/#choosing-a-model).
-- For a comprehensive example, see [Neural search tutorial]({{site.url}}{{site.baseurl}}/search-plugins/neural-search-tutorial/).
+- For a comprehensive example, see [Getting started with semantic and hybrid search]({{site.url}}{{site.baseurl}}/search-plugins/neural-search-tutorial/).
````
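The `text_image_embedding` processor covered by this file maps a text field and an image field to a single combined embedding. As a rough sketch (the pipeline name, model ID, and field names here are placeholders, not values from this changeset):

```json
PUT /_ingest/pipeline/nlp-multimodal-pipeline
{
  "description": "A multimodal embedding pipeline",
  "processors": [
    {
      "text_image_embedding": {
        "model_id": "your_model_id",
        "embedding": "vector_embedding",
        "field_map": {
          "text": "image_description",
          "image": "image_binary"
        }
      }
    }
  ]
}
```

Unlike `text_embedding`, this processor writes one combined vector (`embedding`) generated from both mapped source fields.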
`_query-dsl/compound/hybrid.md` (+2 −2)

````diff
@@ -11,9 +11,9 @@ You can use a hybrid query to combine relevance scores from multiple queries int
 
 ## Example
 
-Learn how to use the `hybrid` query by following the steps in [Using hybrid search]({{site.url}}{{site.baseurl}}/search-plugins/hybrid-search/#using-hybrid-search).
+Learn how to use the `hybrid` query by following the steps in [Hybrid search]({{site.url}}{{site.baseurl}}/search-plugins/hybrid-search/).
 
-For a comprehensive example, follow the [Neural search tutorial]({{site.url}}{{site.baseurl}}/ml-commons-plugin/semantic-search#tutorial).
+For a comprehensive example, follow the [Getting started with semantic and hybrid search]({{site.url}}{{site.baseurl}}/ml-commons-plugin/semantic-search#tutorial).
````
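The `hybrid` query this file documents wraps several query clauses whose scores are then normalized and combined by a search pipeline. A skeletal example (the index, field names, pipeline name, and model ID are placeholders):

```json
GET /my-index/_search?search_pipeline=my-norm-pipeline
{
  "query": {
    "hybrid": {
      "queries": [
        { "match": { "text": { "query": "cowboy hat" } } },
        {
          "neural": {
            "embedding": {
              "query_text": "cowboy hat",
              "model_id": "your_model_id",
              "k": 10
            }
          }
        }
      ]
    }
  }
}
```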
`_search-plugins/search-pipelines/explanation-processor.md` (+2 −2)

````diff
@@ -30,7 +30,7 @@ Field | Data type | Description
 
 The following example demonstrates using a search pipeline with a `hybrid_score_explanation` processor.
 
-For a comprehensive example, follow the [Neural search tutorial]({{site.url}}{{site.baseurl}}/ml-commons-plugin/semantic-search#tutorial).
+For a comprehensive example, follow the [Getting started with semantic and hybrid search]({{site.url}}{{site.baseurl}}/ml-commons-plugin/semantic-search#tutorial).
 
 ### Creating a search pipeline
 
@@ -217,4 +217,4 @@ GET /my-nlp-index/_search?search_pipeline=nlp-search-pipeline&explain=true
 ...
 ```
 
-For more information about setting up hybrid search, see [Using hybrid search]({{site.url}}{{site.baseurl}}/search-plugins/hybrid-search/#using-hybrid-search).
+For more information about setting up hybrid search, see [Hybrid search]({{site.url}}{{site.baseurl}}/search-plugins/hybrid-search/).
````
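For context, the `hybrid_score_explanation` processor referenced in this file is a response processor added alongside the normalization processor. A minimal pipeline sketch (the pipeline name and technique values are illustrative):

```json
PUT /_search/pipeline/nlp-search-pipeline
{
  "phase_results_processors": [
    {
      "normalization-processor": {
        "normalization": { "technique": "min_max" },
        "combination": { "technique": "arithmetic_mean" }
      }
    }
  ],
  "response_processors": [
    {
      "hybrid_score_explanation": {}
    }
  ]
}
```

Sending a hybrid query with `explain=true` through this pipeline returns per-subquery normalization and combination details in the explanation output.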
`_search-plugins/search-pipelines/normalization-processor.md` (+2 −2)

````diff
@@ -42,7 +42,7 @@ Field | Data type | Description
 
 The following example demonstrates using a search pipeline with a `normalization-processor`.
 
-For a comprehensive example, follow the [Neural search tutorial]({{site.url}}{{site.baseurl}}/ml-commons-plugin/semantic-search#tutorial).
+For a comprehensive example, follow the [Getting started with semantic and hybrid search]({{site.url}}{{site.baseurl}}/ml-commons-plugin/semantic-search#tutorial).
 
 ### Creating a search pipeline
 
@@ -112,7 +112,7 @@ GET /my-nlp-index/_search?search_pipeline=nlp-search-pipeline
 ```
 {% include copy-curl.html %}
 
-For more information about setting up hybrid search, see [Using hybrid search]({{site.url}}{{site.baseurl}}/search-plugins/hybrid-search/#using-hybrid-search).
+For more information about setting up hybrid search, see [Hybrid search]({{site.url}}{{site.baseurl}}/search-plugins/hybrid-search/).
````
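For reference, a `normalization-processor` search pipeline with per-subquery weights might look like the following (the pipeline name, techniques, and weights are illustrative, not taken from this changeset):

```json
PUT /_search/pipeline/nlp-search-pipeline
{
  "description": "Post-processor for hybrid search",
  "phase_results_processors": [
    {
      "normalization-processor": {
        "normalization": { "technique": "min_max" },
        "combination": {
          "technique": "arithmetic_mean",
          "parameters": { "weights": [0.3, 0.7] }
        }
      }
    }
  ]
}
```

The weights are applied positionally to the subqueries of the `hybrid` query, so their order must match the order of the query clauses.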
`_search-plugins/search-pipelines/rag-processor.md` (+1 −1)

````diff
@@ -99,4 +99,4 @@ GET /my_rag_test_data/_search?search_pipeline=rag_pipeline
 ```
 {% include copy-curl.html %}
 
-For more information about setting up conversational search, see [Using conversational search]({{site.url}}{{site.baseurl}}/search-plugins/conversational-search/#using-conversational-search).
+For more information about setting up conversational search, see [Conversational search with RAG]({{site.url}}{{site.baseurl}}/search-plugins/conversational-search/).
````
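The `rag_pipeline` referenced in the example request is a search pipeline containing a `retrieval_augmented_generation` response processor. A rough sketch (the model ID, prompts, and field names are placeholders):

```json
PUT /_search/pipeline/rag_pipeline
{
  "response_processors": [
    {
      "retrieval_augmented_generation": {
        "model_id": "your_model_id",
        "context_field_list": ["text"],
        "system_prompt": "You are a helpful assistant.",
        "user_instructions": "Answer using only the provided context."
      }
    }
  ]
}
```

The `context_field_list` names the document fields whose contents are passed to the generative model as retrieval context.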
`_search-plugins/search-pipelines/score-ranker-processor.md` (+1 −1)

````diff
@@ -67,7 +67,7 @@ PUT /_search/pipeline/<rrf-pipeline>
 }
 ```
 
-For more information about setting up hybrid search, see [Using hybrid search]({{site.url}}{{site.baseurl}}/search-plugins/hybrid-search/#using-hybrid-search).
+For more information about setting up hybrid search, see [Hybrid search]({{site.url}}{{site.baseurl}}/search-plugins/hybrid-search/).
````
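For context, the `<rrf-pipeline>` named in the hunk header is a search pipeline containing a `score-ranker-processor`. A minimal sketch, assuming the reciprocal rank fusion syntax from the processor's reference documentation (the pipeline name and rank constant are illustrative):

```json
PUT /_search/pipeline/rrf-pipeline
{
  "description": "Post-processor for hybrid search using reciprocal rank fusion",
  "phase_results_processors": [
    {
      "score-ranker-processor": {
        "combination": {
          "technique": "rrf",
          "rank_constant": 60
        }
      }
    }
  ]
}
```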
`_tutorials/gen-ai/ai-search-flows/building-flows.md` (+2 −2)

````diff
@@ -10,7 +10,7 @@ redirect_from:
 
 # Creating and customizing AI search workflows in OpenSearch Dashboards
 
-This tutorial shows you how to build automated AI search flows in OpenSearch Dashboards.
+This tutorial shows you how to build automated AI search workflows in OpenSearch Dashboards. For more information, see [Building AI search workflows in OpenSearch Dashboards]({{site.url}}{{site.baseurl}}/vector-search/ai-search/workflow-builder/).
 
 ## Prerequisite: Provision ML resources
 
@@ -355,7 +355,7 @@ Configure an ML inference search request processor and a normalization processor
 ```
 {% include copy.html %}
 
-**For the normalization processor**, configure weights for each subquery. For more information, see the [hybrid search normalization processor example]({{site.url}}{{site.baseurl}}/search-plugins/hybrid-search/#step-4-configure-a-search-pipeline).
+**For the normalization processor**, configure weights for each subquery. For more information, see the [hybrid search normalization processor example]({{site.url}}{{site.baseurl}}/search-plugins/hybrid-search/#step-3-configure-a-search-pipeline).
````
`_tutorials/reranking/reranking-by-field.md` (+1 −1)

````diff
@@ -10,7 +10,7 @@ redirect_from:
 
 # Reranking search results by a field
 
-Starting with OpenSearch 2.18, you can rerank search [results by a field]({{site.url}}{{site.baseurl}}/search-plugins/search-pipelines/rerank-processor/#the-by_field-rerank-type). This feature is useful when your documents include a field that is particularly important or when you want to rerank results from an externally hosted model.
+Starting with OpenSearch 2.18, you can rerank search [results by a field]({{site.url}}{{site.baseurl}}/search-plugins/search-pipelines/rerank-processor/#the-by_field-rerank-type). This feature is useful when your documents include a field that is particularly important or when you want to rerank results from an externally hosted model. For more information, see [Reranking search results by a field]({{site.url}}{{site.baseurl}}/search-plugins/search-relevance/rerank-by-field/).
 
 This tutorial explains how to use the [Cohere Rerank](https://docs.cohere.com/reference/rerank-1) model to rerank search results by a field in self-managed OpenSearch and in [Amazon OpenSearch Service](https://docs.aws.amazon.com/opensearch-service/).
````
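The `by_field` rerank type mentioned above is configured as a response processor. A rough sketch (the pipeline name and target field are placeholders, and this assumes documents carry a numeric field to sort by):

```json
PUT /_search/pipeline/rerank_byfield_pipeline
{
  "response_processors": [
    {
      "rerank": {
        "by_field": {
          "target_field": "document_score",
          "remove_target_field": false
        }
      }
    }
  ]
}
```

Queries sent with `?search_pipeline=rerank_byfield_pipeline` are reordered by the value of the target field rather than by the original relevance score.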
`_tutorials/vector-search/neural-search-tutorial.md` (+64 −16)

````diff
@@ -67,21 +67,19 @@ For a [custom local model]({{site.url}}{{site.baseurl}}/ml-commons-plugin/custom
 
 For more information about ML-related cluster settings, see [ML Commons cluster settings]({{site.url}}{{site.baseurl}}/ml-commons-plugin/cluster-settings/).
 
-## Tutorial overview
+## Tutorial
 
 This tutorial consists of the following steps:
 
 {% include list.html list_items=page.steps%}
 
-Some steps in the tutorial contain optional `Test it` sections. You can ensure that the step completed successfully by running requests in these sections.
-
-After you're done, follow the steps in the [Clean up](#clean-up) section to delete all created components.
+You can follow this tutorial by using your command line or the OpenSearch Dashboards [Dev Tools console]({{site.url}}{{site.baseurl}}/dashboards/dev-tools/run-queries/).
 
-## Tutorial
+Some steps in the tutorial contain optional <span>Test it</span>{: .text-delta} sections. You can confirm that the step completed successfully by running the requests in these sections.
 
-You can follow this tutorial by using your command line or the OpenSearch Dashboards [Dev Tools console]({{site.url}}{{site.baseurl}}/dashboards/dev-tools/run-queries/).
+After you're done, follow the steps in the [Clean up](#clean-up) section to delete all created components.
 
-## Step 1: Choose a model
+### Step 1: Choose a model
 
 First, you'll need to choose a language model in order to generate vector embeddings from text fields, both at ingestion time and query time.
 
````
````diff
@@ -106,7 +104,7 @@ Alternatively, you can choose one of the following options for your model:
 
 For information about choosing a model, see [Further reading](#further-reading).
 
-## Step 2: Register and deploy the model
+### Step 2: Register and deploy the model
 
 To register the model, provide the model group ID in the register request:
 
````
````diff
@@ -286,11 +284,11 @@ GET /_plugins/_ml/profile/models
 ```
 </details>
 
-## Step 3: Ingest data
+### Step 3: Ingest data
 
 OpenSearch uses a language model to transform text into vector embeddings. During ingestion, OpenSearch creates vector embeddings for the text fields in the request. During search, you can generate vector embeddings for the query text by applying the same model, allowing you to perform vector similarity search on the documents.
 
-### Step 3(a): Create an ingest pipeline
+#### Step 3(a): Create an ingest pipeline
 
 Now that you have deployed a model, you can use this model to configure an [ingest pipeline]({{site.url}}{{site.baseurl}}/api-reference/ingest-apis/index/) that contains one processor: a task that transforms document fields before documents are ingested into an index. In this example, you'll set up a `text_embedding` processor that creates vector embeddings from text. You'll need the `model_id` of the model you set up in the previous section and a `field_map`, which specifies the name of the field from which to take the text (`text`) and the name of the field in which to record embeddings (`passage_embedding`):
 
````
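Based on the description in this hunk, the ingest pipeline created in this step has roughly the following shape (the pipeline name `nlp-ingest-pipeline` comes from the surrounding tutorial; the model ID is a placeholder):

```json
PUT /_ingest/pipeline/nlp-ingest-pipeline
{
  "description": "An NLP ingest pipeline",
  "processors": [
    {
      "text_embedding": {
        "model_id": "your_model_id",
        "field_map": {
          "text": "passage_embedding"
        }
      }
    }
  ]
}
```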
````diff
@@ -346,7 +344,7 @@ The response contains the ingest pipeline:
 ```
 </details>
 
-### Step 3(b): Create a vector index
+#### Step 3(b): Create a vector index
 
 Now you'll create a vector index with a field named `text`, which contains an image description, and a [`knn_vector`]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/knn-vector/) field named `passage_embedding`, which contains the vector embedding of the text. Additionally, set the default ingest pipeline to the `nlp-ingest-pipeline` you created in the previous step:
 
````
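A sketch of the index described in this step (the vector `dimension` depends on the chosen model and is illustrative here):

```json
PUT /my-nlp-index
{
  "settings": {
    "index.knn": true,
    "default_pipeline": "nlp-ingest-pipeline"
  },
  "mappings": {
    "properties": {
      "id": { "type": "text" },
      "text": { "type": "text" },
      "passage_embedding": {
        "type": "knn_vector",
        "dimension": 768
      }
    }
  }
}
```

Setting `default_pipeline` means every document indexed into `my-nlp-index` passes through the embedding pipeline automatically.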
````diff
@@ -399,7 +397,7 @@ GET /my-nlp-index/_mappings
 
 </details>
 
-### Step 3(c): Ingest documents into the index
+#### Step 3(c): Ingest documents into the index
 
 In this step, you'll ingest several sample documents into the index. The sample data is taken from the [Flickr image dataset](https://www.kaggle.com/datasets/hsankesara/flickr-image-dataset). Each document contains a `text` field corresponding to the image description and an `id` field corresponding to the image ID:
 
````
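An individual document from such a dataset can be indexed like this (the description and image ID shown here are placeholders, not the tutorial's actual sample data):

```json
PUT /my-nlp-index/_doc/1
{
  "text": "A man riding a horse in an open field",
  "id": "1234567890.jpg"
}
```

Because the index's default pipeline is `nlp-ingest-pipeline`, the `passage_embedding` vector is generated automatically at ingestion time.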
````diff
@@ -479,7 +477,7 @@ The response includes the document `_source` containing the original `text` and
 }
 ```
 
-## Step 4: Search the data
+### Step 4: Search the data
 
 Now you'll search the index using a keyword search, a semantic search, and a combination of the two.
 
````
````diff
@@ -708,7 +706,7 @@ PUT /_search/pipeline/nlp-search-pipeline
 ```
 {% include copy-curl.html %}
 
-#### Step 2: Search with the hybrid query
+#### Step 2: Search using a hybrid query
 
 You'll use the [`hybrid` query]({{site.url}}{{site.baseurl}}/query-dsl/compound/hybrid/) to combine the `match` and `neural` query clauses. Make sure to apply the previously created `nlp-search-pipeline` to the request in the query parameter:
 
````
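The hybrid request described in this step might look as follows (the query text and `k` are illustrative; replace `your_model_id` with the deployed model's ID):

```json
GET /my-nlp-index/_search?search_pipeline=nlp-search-pipeline
{
  "_source": { "excludes": ["passage_embedding"] },
  "query": {
    "hybrid": {
      "queries": [
        { "match": { "text": { "query": "wild west" } } },
        {
          "neural": {
            "passage_embedding": {
              "query_text": "wild west",
              "model_id": "your_model_id",
              "k": 5
            }
          }
        }
      ]
    }
  }
}
```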
````diff
@@ -838,7 +836,53 @@ You can now experiment with different weights, normalization techniques, and com
 
 You can parameterize the search by using search templates. Search templates hide implementation details, reducing the number of nested levels and thus the query complexity. For more information, see [search templates]({{site.url}}{{site.baseurl}}/search-plugins/search-template/).
 
-### Clean up
+## Using automated workflows
+
+You can quickly set up semantic or hybrid search using [_automated workflows_]({{site.url}}{{site.baseurl}}/automating-configurations/). This approach automatically creates and provisions all necessary resources. For more information, see [Workflow templates]({{site.url}}{{site.baseurl}}/automating-configurations/workflow-templates/).
+
+### Automated semantic search setup
+
+OpenSearch provides a [workflow template]({{site.url}}{{site.baseurl}}/automating-configurations/workflow-templates/) that automatically registers and deploys a default local model (`huggingface/sentence-transformers/paraphrase-MiniLM-L3-v2`) and creates an ingest pipeline and a vector index:
+
+```json
+POST /_plugins/_flow_framework/workflow?use_case=semantic_search_with_local_model&provision=true
+```
+{% include copy-curl.html %}
+
+Review the semantic search workflow template [defaults](https://github.com/opensearch-project/flow-framework/blob/main/src/main/resources/defaults/semantic-search-with-local-model-defaults.json) to determine whether you need to update any of the parameters. For example, if you want to use a different model, specify the model name in the request body:
+
+```json
+POST /_plugins/_flow_framework/workflow?use_case=semantic_search_with_local_model&provision=true
+```
+
+OpenSearch responds with a workflow ID for the created workflow:
+
+```json
+{
+  "workflow_id" : "U_nMXJUBq_4FYQzMOS4B"
+}
+```
+
+To check the workflow status, send the following request:
+
+```json
+GET /_plugins/_flow_framework/workflow/U_nMXJUBq_4FYQzMOS4B/_status
+```
+{% include copy-curl.html %}
+
+Once the workflow completes, the `state` changes to `COMPLETED`. The workflow runs the following steps:
+
+1. [Step 2](#step-2-register-and-deploy-the-model) to register and deploy the model.
+1. [Step 3(a)](#step-3a-create-an-ingest-pipeline) to create an ingest pipeline.
+1. [Step 3(b)](#step-3b-create-a-vector-index) to create a vector index.
+
+You can now continue with [Step 3(c)](#step-3c-ingest-documents-into-the-index) to ingest documents into the index and [Step 4](#step-4-search-the-data) to search your data.
+
+## Clean up
 
 After you're done, delete the components you've created in this tutorial from the cluster:
````
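The clean-up requests that follow in the full tutorial boil down to deletions of this shape (the resource names and IDs are placeholders matching the names used earlier in this tutorial):

```json
DELETE /my-nlp-index
DELETE /_ingest/pipeline/nlp-ingest-pipeline
DELETE /_search/pipeline/nlp-search-pipeline
POST /_plugins/_ml/models/your_model_id/_undeploy
DELETE /_plugins/_ml/models/your_model_id
DELETE /_plugins/_ml/model_groups/your_model_group_id
```

Note that a deployed model must be undeployed before it can be deleted.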
````diff
 - Read about the basics of OpenSearch semantic search in [Building a semantic search engine in OpenSearch](https://opensearch.org/blog/semantic-search-solutions/).
-- Read about the combining keyword and semantic search, the normalization and combination technique options, and benchmarking tests in [The ABCs of semantic search in OpenSearch: Architectures, benchmarks, and combination strategies](https://opensearch.org/blog/semantic-science-benchmarks/).
+- Read about the combining keyword and semantic search, the normalization and combination technique options, and benchmarking tests in [The ABCs of semantic search in OpenSearch: Architectures, benchmarks, and combination strategies](https://opensearch.org/blog/semantic-science-benchmarks/).
+
+## Next steps
+
+- Explore [AI search]({{site.url}}{{site.baseurl}}/vector-search/ai-search/index/) in OpenSearch.
````
`_tutorials/vector-search/semantic-search/semantic-search-asymmetric.md` (+1 −1)

````diff
@@ -10,7 +10,7 @@ redirect_from:
 
 # Semantic search using an asymmetric embedding model
 
-This tutorial shows you how to perform semantic search by generating text embeddings using an asymmetric embedding model. The tutorial uses the multilingual `intfloat/multilingual-e5-small` model from Hugging Face.
+This tutorial shows you how to perform semantic search by generating text embeddings using an asymmetric embedding model. The tutorial uses the multilingual `intfloat/multilingual-e5-small` model from Hugging Face. For more information, see [Semantic search]({{site.url}}{{site.baseurl}}/vector-search/ai-search/semantic-search/).
 
 Replace the placeholders beginning with the prefix `your_` with your own values.
````
`_tutorials/vector-search/semantic-search/semantic-search-bedrock-cohere.md` (+1 −1)

````diff
@@ -10,7 +10,7 @@ redirect_from:
 
 # Semantic search using Cohere Embed on Amazon Bedrock
 
-This tutorial shows you how to implement semantic search in [Amazon OpenSearch Service](https://docs.aws.amazon.com/opensearch-service/) using the [Cohere Embed model](https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-embed.html).
+This tutorial shows you how to implement semantic search in [Amazon OpenSearch Service](https://docs.aws.amazon.com/opensearch-service/) using the [Cohere Embed model](https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-embed.html). For more information, see [Semantic search]({{site.url}}{{site.baseurl}}/vector-search/ai-search/semantic-search/).
 
 If you are using self-managed OpenSearch instead of Amazon OpenSearch Service, create a connector to the model on Amazon Bedrock using [the blueprint](https://github.com/opensearch-project/ml-commons/blob/2.x/docs/remote_inference_blueprints/bedrock_connector_cohere_cohere.embed-english-v3_blueprint.md). For more information about creating a connector, see [Connectors]({{site.url}}{{site.baseurl}}/ml-commons-plugin/remote-models/connectors/).
````
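Creating such a connector uses the ML Commons connector API. A condensed sketch (the region, credentials, and request body are placeholders; follow the linked blueprint for the exact action configuration and pre/post-processing functions):

```json
POST /_plugins/_ml/connectors/_create
{
  "name": "Amazon Bedrock connector: Cohere Embed",
  "description": "Connector for the Cohere Embed model on Amazon Bedrock",
  "version": 1,
  "protocol": "aws_sigv4",
  "parameters": {
    "region": "your_aws_region",
    "service_name": "bedrock",
    "model": "cohere.embed-english-v3"
  },
  "credential": {
    "access_key": "your_aws_access_key",
    "secret_key": "your_aws_secret_key"
  },
  "actions": [
    {
      "action_type": "predict",
      "method": "POST",
      "url": "https://bedrock-runtime.${parameters.region}.amazonaws.com/model/${parameters.model}/invoke",
      "headers": { "content-type": "application/json" },
      "request_body": "{ \"texts\": ${parameters.texts}, \"input_type\": \"search_document\" }"
    }
  ]
}
```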