The Elastic AI Assistant utilizes generative AI to bolster your cybersecurity operations team. It allows users to interact with {elastic-sec} for tasks such as alert investigation, incident response, and query generation or conversion using natural language and much more.
AI Assistant can connect to multiple LLM providers so you can select the best model for your needs.
[role="screenshot"]
image::images/assistant-basic-view.png[Image of AI Assistant chat window,90%]
WARNING: The Elastic AI Assistant is designed to enhance your analysis with smart dialogues. Its capabilities are still developing. Users should exercise caution as the quality of its responses might vary. Your insights and feedback will help us improve this feature. Always cross-verify AI-generated advice for accuracy.
.Recommended models
[sidebar]
--
While AI Assistant is compatible with many different models, our testing found increased quality with Azure 32k, and faster, more cost-effective responses with Claude 3 Haiku and OpenAI GPT-4 Turbo.
--
.Requirements
[sidebar]
You must create a generative AI connector before you can use AI Assistant.
For more information about setting up generative AI connectors, refer to <<assistant-connect-to-bedrock>>, <<assistant-connect-to-openai>>, or <<assistant-connect-to-azure-openai>>.
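If you prefer to script connector creation rather than use the {kib} UI, connectors can also be created through Kibana's actions API (`POST <kibana host>/api/actions/connector`, with the `kbn-xsrf` header set). The request body below is only a sketch for an OpenAI connector — the exact `config` field names and the `.gen-ai` connector type ID are assumptions you should verify against the connector documentation for your version:

```
{
  "name": "OpenAI (AI Assistant)",
  "connector_type_id": ".gen-ai",
  "config": {
    "apiProvider": "OpenAI",
    "apiUrl": "https://api.openai.com/v1/chat/completions",
    "defaultModel": "gpt-4"
  },
  "secrets": {
    "apiKey": "<your OpenAI API key>"
  }
}
```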
[discrete]
[[start-chatting]]
You can also chat with AI Assistant from several particular pages in {elastic-sec}:
* <<data-quality-dash, Data Quality dashboard>>: Select the *Incompatible fields* tab, then click *Chat*. (This is only available for fields marked red, indicating they're incompatible).
* <<timelines-ui, Timeline>>: Select the *Security Assistant* tab.
NOTE: Each user's chat history and custom quick prompts are automatically saved, so you can leave {elastic-sec} and return to pick up a conversation later.
[discrete]
[[interact-with-assistant]]
image::images/quick-prompts.png[Quick prompts highlighted below a conversation,90%]
+
Quick prompt availability varies based on context — for example, the **Alert summarization** quick prompt appears when you open AI Assistant while viewing an alert. To customize existing quick prompts and create new ones, click *Add Quick prompt*.
* In an active conversation, you can use the inline actions that appear on messages to incorporate AI Assistant's responses into your workflows:
** *Add note to timeline* (image:images/icon-add-note.png[Add note icon,16,16]): Add the selected text to your currently active Timeline as a note.
** *Add to existing case* (image:images/icon-add-to-case.png[Add to case icon,19,16]): Add a comment to an existing case using the selected text.
** *Copy to clipboard* (image:images/icon-copy.png[Copy to clipboard icon,17,18]): Copy the text to clipboard to paste elsewhere. Also helpful for resubmitting a previous prompt.
** *Add to timeline* (image:images/icon-add-to-timeline.png[Add to timeline icon,17,18]): Add a filter or query to Timeline using the text. This button appears for particular queries in AI Assistant's responses.
+
TIP: Be sure to specify which language you'd like AI Assistant to use when writing a query. For example: "Can you generate an Event Query Language query to find four failed logins followed by a successful login?"
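For instance, a request like the one in the TIP above could return an EQL `sequence` query along these lines. This is only a hedged sketch of what a generated query might look like — the `host.id`, `user.name`, and `event.outcome` fields assume ECS-style authentication events, and you should verify any generated query against your own field mappings:

```
sequence by host.id, user.name with maxspan=5m
  [ authentication where event.outcome == "failure" ] with runs=4
  [ authentication where event.outcome == "success" ]
```

If a generated query looks right, the *Add to timeline* action lets you investigate it directly in Timeline.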
[discrete]
[[configure-ai-assistant]]
image::images/assistant-settings-menu.png[AI Assistant's settings menu, open to the Conversations tab]
The *Settings* menu has the following tabs:
* **Conversations:** When you open AI Assistant from certain pages, such as Timeline or Alerts, it defaults to the relevant conversation type. Choose the default system prompt for each conversation type, the connector, and model (if applicable). The **Streaming** setting controls whether AI Assistant's responses appear word-by-word (streamed), or as a complete block of text. Streaming is currently only available for OpenAI models.
* **Quick Prompts:** Modify existing quick prompts or create new ones. To create a new quick prompt, type a unique name in the *Name* field, then press *enter*. Under *Prompt*, enter or update the quick prompt's text.
* **System Prompts:** Edit existing system prompts or create new ones. To create a new system prompt, type a unique name in the *Name* field, then press *enter*. Under *Prompt*, enter or update the system prompt's text. Under *Contexts*, select where the system prompt should appear.
+
NOTE: To delete a custom prompt, open the *Name* drop-down menu, hover over the prompt you want to delete, and click the *X* that appears. You cannot delete the default prompts.
[[ai-assistant-anonymization]]
=== Anonymization
The **Anonymization** tab of the AI Assistant settings menu allows you to define default data anonymization behavior for events you send to AI Assistant. Fields with **Allowed** toggled on are included in events provided to AI Assistant. **Allowed** fields with **Anonymized** set to **Yes** are included, but with their values obfuscated.
[role="screenshot"]
image::images/assistant-anonymization-menu.png[AI Assistant's settings menu, open to the Anonymization tab]
The fields on this list are among those most likely to provide relevant context to AI Assistant.
The *Show anonymized* toggle controls whether you see the obfuscated or plaintext versions of the fields you sent to AI Assistant. It doesn't control what gets obfuscated — that's determined by the anonymization settings. It also doesn't affect how event fields appear _before_ being sent to AI Assistant. Instead, it controls how fields that were already sent and obfuscated appear to you.
When you include a particular event as context, such as an alert from the Alerts page, you can adjust anonymization behavior for the specific event. Be sure the anonymization behavior meets your specifications before sending a message with the event attached.