Commit 760b287

Add descriptive text under section headings to explain included modules
1 parent 1757566 commit 760b287

1 file changed: 6 additions & 0 deletions

assemblies/using-guardrails-for-ai-safety.adoc

@@ -6,13 +6,19 @@ ifdef::context[:parent-context: {context}]
Use the Guardrails tools to ensure the safety and security of your generative AI applications in production.

== Detecting PII and sensitive data
+Protect user privacy by identifying and filtering personally identifiable information (PII) in LLM inputs and outputs using built-in regex detectors or custom detection models.
+
include::modules/detecting-pii-by-using-guardrails-with-llama-stack.adoc[leveloffset=+1]
include::modules/guardrails-filtering-flagged-content-by-sending-requests-to-the-regex-detector.adoc[leveloffset=+1]

== Securing prompts
+Prevent malicious prompt injection attacks by using specialized detectors to identify and block potentially harmful prompts before they reach your model.
+
include::modules/mitigating-prompt-injection-by-using-a-hugging-face-prompt-injection-detector.adoc[leveloffset=+1]

== Moderating and safeguarding content
+Filter toxic, hateful, or profane content from user inputs and model outputs to maintain safe and appropriate AI interactions.
+
include::modules/detecting-hateful-and-profane-language.adoc[leveloffset=+1]
include::modules/guardrails-enforcing-configured-safety-pipelines-for-llm-inference-using-guardrails-gateway.adoc[leveloffset=+1]
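
The PII and content-moderation descriptions added above both refer to detectors that run against LLM inputs and outputs. As an illustration only, not taken from the included modules: once a Guardrails detector is registered as a Llama Stack safety shield, a check can be invoked through the llama-stack-client Python SDK roughly as follows. The base_url and shield_id values are placeholder assumptions.

```python
# Minimal sketch: run a registered Guardrails shield against a user message
# with the Llama Stack client. The base_url and shield_id are assumptions;
# use the shield IDs actually registered on your Llama Stack server.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")  # assumed local server

response = client.safety.run_shield(
    shield_id="pii_detection",  # hypothetical shield name for a PII detector
    messages=[{"role": "user", "content": "My card number is 4111 1111 1111 1111."}],
    params={},
)

if response.violation:
    # A violation means the detector flagged the content; filter or block it.
    print("Flagged:", response.violation.user_message)
else:
    print("No PII detected.")
```

The same call shape would apply to a prompt-injection or hateful-and-profane-language shield; only the shield_id and message content change.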
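
Similarly, the built-in regex detector mentioned in the PII description is reached over HTTP. The sketch below uses the requests library; the route, endpoint path, and payload keys are assumptions about the detector's interface, so treat the included module as the authoritative contract.

```python
# Hypothetical sketch of sending content to the built-in regex detector over HTTP.
# ORCHESTRATOR_ROUTE, the endpoint path, and the payload keys are assumptions,
# not values taken from the documentation modules included above.
import requests

ORCHESTRATOR_ROUTE = "https://guardrails-orchestrator.example.com"  # placeholder route

payload = {
    "detectors": {"regex": {"regex": ["email", "credit-card"]}},  # assumed detector config keys
    "content": "Contact me at jane.doe@example.com",
}

resp = requests.post(
    f"{ORCHESTRATOR_ROUTE}/api/v2/text/detection/content",  # assumed endpoint path
    json=payload,
    timeout=30,
)
resp.raise_for_status()

# Any returned detections indicate flagged spans that the application should filter.
print(resp.json())
```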
