[llmobs] Clean up setup documentation #30466

Draft
wants to merge 4 commits into base: master
29 changes: 12 additions & 17 deletions config/_default/menus/main.en.yaml
@@ -4604,36 +4604,26 @@ menu:
identifier: llm_obs
parent: ai_observability_heading
weight: 10000
- name: Terms and Concepts
url: llm_observability/terms/
parent: llm_obs
identifier: llm_obs_terms
weight: 1
- name: Quickstart
url: llm_observability/quickstart/
parent: llm_obs
identifier: llm_obs_quickstart
weight: 2
- name: Setup
url: llm_observability/setup/
- name: Instrumentation
url: llm_observability/instrumentation/
parent: llm_obs
identifier: llm_obs_setup
identifier: llm_obs_instrumentation
weight: 3
- name: LLM Obs SDK
url: llm_observability/sdk
parent: llm_obs
parent: llm_obs_instrumentation
identifier: llm_obs_sdk
weight: 4
- name: Auto Instrumentation
url: llm_observability/sdk/auto_instrumentation
parent: llm_obs_sdk
identifier: llm_obs_setup_auto_instrumentation
weight: 401
weight: 301
- name: LLM Obs API
url: llm_observability/setup/api/
parent: llm_obs
parent: llm_obs_instrumentation
identifier: llm_obs_api
weight: 6
weight: 302
- name: Monitoring
url: llm_observability/monitoring
parent: llm_obs
@@ -4684,6 +4674,11 @@ menu:
parent: llm_obs
identifier: llm_obs_experiments
weight: 9
- name: Terms and Concepts
url: llm_observability/terms/
parent: llm_obs
identifier: llm_obs_terms
weight: 1
- name: Guides
url: llm_observability/guide/
parent: llm_obs
103 changes: 25 additions & 78 deletions content/en/llm_observability/quickstart.md
@@ -6,126 +6,70 @@ further_reading:
- link: '/llm_observability'
tag: 'Documentation'
text: 'Learn about LLM Observability'
- link: '/llm_observability/sdk'
tag: 'SDK'
text: 'SDK reference'
---

## Overview

This guide uses the LLM Observability SDKs for [Python][1] and [Node.js][2]. If your application is written in another language, you can create traces by calling the [API][8] instead.

## Setup

### Jupyter notebooks

To better understand LLM Observability terms and concepts, you can explore the examples in the [LLM Observability Jupyter Notebooks repository][12]. These notebooks provide a hands-on experience, and allow you to apply these concepts in real time.

## Trace an LLM application

To generate an LLM Observability trace, you can run a Python or Node.js script.

### Prerequisites

- LLM Observability requires a Datadog API key. For more information, see [the instructions for creating an API key][7].
- The following example script uses OpenAI, but you can modify it to use a different provider. To run the script as written, you need:
- An OpenAI API key stored in your environment as `OPENAI_API_KEY`. To create one, see [Account Setup][4] and [Set up your API key][6] in the official OpenAI documentation.
- The OpenAI Python library installed. See [Setting up Python][5] in the official OpenAI documentation for instructions.
- LLM Observability requires a Datadog API key if you don't have an Agent running. Find your API key [in the Datadog application](https://app.datadoghq.com/organization-settings/api-keys).

{{< tabs >}}
{{% tab "Python" %}}

1. Install the SDK and OpenAI packages:
1. Install the SDK:

```shell
pip install ddtrace
pip install openai
```

2. Create a script, which makes a single OpenAI call.

```python
import os
from openai import OpenAI

oai_client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))

completion = oai_client.chat.completions.create(
model="gpt-3.5-turbo",
messages=[
{"role": "system", "content": "You are a helpful customer assistant for a furniture store."},
{"role": "user", "content": "I'd like to buy a chair for my living room."},
],
)
```

3. Run the script with the following shell command. This sends a trace of the OpenAI call to Datadog.
2. Prefix your Python start command with `ddtrace-run`:

```shell
DD_LLMOBS_ENABLED=1 DD_LLMOBS_ML_APP=onboarding-quickstart \
DD_API_KEY=<YOUR_DATADOG_API_KEY> DD_SITE=<YOUR_DD_SITE> \
DD_LLMOBS_AGENTLESS_ENABLED=1 ddtrace-run python quickstart.py
DD_LLMOBS_ENABLED=1 \
DD_LLMOBS_ML_APP=quickstart-app \
DD_API_KEY=<YOUR_DATADOG_API_KEY> \
DD_SITE={{< region-param key="dd_site" code="true" >}} \
DD_LLMOBS_AGENTLESS_ENABLED=1 \
ddtrace-run <your application command>
```

Replace `<YOUR_DATADOG_API_KEY>` with your Datadog API key, and replace `<YOUR_DD_SITE>` with your [Datadog site][2].
Replace `<YOUR_DATADOG_API_KEY>` with your Datadog API key.

For more information about required environment variables, see [the SDK documentation][1].

[1]: /llm_observability/setup/sdk/python/#command-line-setup
[2]: /getting_started/site/
{{% /tab %}}

{{% tab "Node.js" %}}
1. Install the SDK and OpenAI packages:
1. Install the SDK:

```shell
npm install dd-trace
npm install openai
```
2. Create a script, which makes a single OpenAI call.

```javascript
const { OpenAI } = require('openai');

const oaiClient = new OpenAI(process.env.OPENAI_API_KEY);

function main () {
const completion = await oaiClient.chat.completions.create({
model: 'gpt-3.5-turbo',
messages: [
{ role: 'system', content: 'You are a helpful customer assistant for a furniture store.' },
{ role: 'user', content: 'I\'d like to buy a chair for my living room.' },
]
});
}

main();
```

3. Run the script with the following shell command. This sends a trace of the OpenAI call to Datadog.
2. Add `NODE_OPTIONS` to your Node.js start command:
```shell
DD_LLMOBS_ENABLED=1 DD_LLMOBS_ML_APP=onboarding-quickstart \
DD_API_KEY=<YOUR_DATADOG_API_KEY> DD_SITE=<YOUR_DD_SITE> \
DD_LLMOBS_AGENTLESS_ENABLED=1 NODE_OPTIONS="--import dd-trace/initialize.mjs" node quickstart.js
DD_LLMOBS_ENABLED=1 \
DD_LLMOBS_ML_APP=quickstart-app \
DD_API_KEY=<YOUR_DATADOG_API_KEY> \
DD_SITE={{< region-param key="dd_site" code="true" >}} \
DD_LLMOBS_AGENTLESS_ENABLED=1 \
NODE_OPTIONS="--import dd-trace/initialize.mjs" <your application command>
```

Replace `<YOUR_DATADOG_API_KEY>` with your Datadog API key, and replace `<YOUR_DD_SITE>` with your [Datadog site][2].

For more information about required environment variables, see [the SDK documentation][1].
Replace `<YOUR_DATADOG_API_KEY>` with your Datadog API key.

[1]: /llm_observability/setup/sdk/nodejs/#command-line-setup
[2]: /getting_started/site/

{{% /tab %}}
{{< /tabs >}}

**Note**: `DD_LLMOBS_AGENTLESS_ENABLED` is only required if you do not have the Datadog Agent running. If the Agent is running in your production environment, make sure this environment variable is unset.

4. View the trace of your LLM call on the **Traces** tab [of the **LLM Observability** page][3] in Datadog.

{{< img src="llm_observability/quickstart_trace_1.png" alt="An LLM Observability trace displaying a single LLM request" style="width:100%;" >}}

The trace you see is composed of a single LLM span. The `ddtrace-run` or `NODE_OPTIONS="--import dd-trace/initialize.mjs"` command automatically traces LLM calls made through any of [Datadog's supported integrations][10].
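
For example, a minimal Python application that makes a single OpenAI call, like the sketch below, produces one LLM span when launched with the `ddtrace-run` command shown above. The sketch assumes the `openai` package is installed and an OpenAI API key is available in `OPENAI_API_KEY`.

```python
import os
from openai import OpenAI

# The OpenAI client is auto-instrumented when this script is started with ddtrace-run.
oai_client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))

completion = oai_client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful customer assistant for a furniture store."},
        {"role": "user", "content": "I'd like to buy a chair for my living room."},
    ],
)
```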

If your application involves more elaborate prompting, or chains and workflows of multiple LLM calls, see the [Setup documentation][11] and the [SDK documentation][1] for instructions on tracing those operations.

## Trace an LLM application in AWS Lambda
The following steps create an Amazon Bedrock-based chatbot running in AWS Lambda and generate LLM Observability traces for it.

@@ -195,6 +139,9 @@ export const handler = async (event) => {
{{% /tab %}}
{{< /tabs >}}

3. Make requests to your application that trigger LLM calls, then view the resulting traces in the **Traces** tab [of the **LLM Observability** page][3] in Datadog. If you don't see any traces, make sure you are using a supported library; otherwise, you may need to instrument your application's LLM calls manually, as sketched below.
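
If you need to instrument calls manually, the SDK's decorators can create an LLM span around your own client code. The following Python sketch is illustrative only: `call_my_model`, the model name, and the provider are placeholders, and the actual LLM call is mocked.

```python
from ddtrace.llmobs import LLMObs
from ddtrace.llmobs.decorators import llm

@llm(model_name="my-model", model_provider="my-provider")
def call_my_model(prompt):
    # Replace this placeholder with a real call to your LLM provider.
    response = "mock LLM response"
    # Record the input and output of the call on the LLM span.
    LLMObs.annotate(input_data=prompt, output_data=response)
    return response
```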


## Further Reading

{{< partial name="whats-next/whats-next.html" >}}
41 changes: 31 additions & 10 deletions content/en/llm_observability/sdk/_index.md
@@ -6,26 +6,39 @@ aliases:
- /llm_observability/setup/sdk/python
- /llm_observability/setup/sdk/nodejs
- /llm_observability/setup/sdk
- /llm_observability/instrumentation
---

## Overview

Datadog's LLM Observability SDK enhances the observability of your LLM applications. The SDK supports Python versions 3.7 and newer.
Datadog's LLM Observability SDK provides automatic instrumentation as well as manual APIs that give you deep, rich insight into the cost, health, and quality of your LLM applications.

### Supported runtimes

| Runtime | Version |
| ------- | ------- |
| Python | 3.7+ |
| Node.js | 16+ |

For information about LLM Observability's integration support, see [Auto Instrumentation][13].

You can trace various operations, such as workflows, tasks, and API calls, with function decorators or context managers. You can also annotate these traces with metadata for deeper insight into the performance and behavior of your applications, including applications that use multiple LLM services or models from the same environment.

For usage examples you can run from a Jupyter notebook, see the [LLM Observability Jupyter Notebooks repository][10].
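
As an illustration, the following Python sketch wraps a workflow around a task and annotates the workflow span. The function names, metadata, and tags are hypothetical, and the sketch assumes LLM Observability has already been enabled (for example, by starting the application with `ddtrace-run` and the environment variables from the quickstart).

```python
from ddtrace.llmobs import LLMObs
from ddtrace.llmobs.decorators import task, workflow

@task
def normalize_message(message):
    # A non-LLM step of the workflow, traced as a task span.
    return message.strip().lower()

@workflow
def handle_request(message):
    cleaned = normalize_message(message)
    # Attach input, output, and metadata to the active workflow span.
    LLMObs.annotate(
        input_data=message,
        output_data=cleaned,
        metadata={"pipeline_version": "v1"},
        tags={"team": "support"},
    )
    return cleaned
```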
## Installation

## Enabling

## Configuration

## Automatic Instrumentation

## Manual Instrumentation

### Capturing LLM Operations

#### Distributed workflows

## Setup
#### Annotating LLM Operations

#### Tagging User Sessions

### Submitting Evaluations

### Modifying Operation Input and Output

### Prerequisites

@@ -841,7 +854,7 @@ getRelevantDocs = llmobs.wrap({ kind: 'retrieval' }, getRelevantDocs)

## Tracking user sessions

Session tracking allows you to associate multiple interactions with a given user.

{{< tabs >}}
{{% tab "Python" %}}
@@ -1635,6 +1648,14 @@ tracer.use('http', false) // disable the http integration
{{< /tabs >}}


### Supported runtimes

| Runtime | Version |
| ------- | ------- |
| Python | 3.7+ |
| Node.js | 16+ |


[1]: https://github.com/openai/openai-python
[2]: https://boto3.amazonaws.com/v1/documentation/api/latest/index.html
[3]: https://botocore.amazonaws.com/v1/documentation/api/latest/tutorial/index.html