
[llmobs] Make quickstart actually quick #30472


Open · wants to merge 13 commits into `master`
197 changes: 69 additions & 128 deletions content/en/llm_observability/quickstart.md
@@ -8,193 +8,134 @@
text: 'Learn about LLM Observability'
---

## Overview

This guide uses the LLM Observability SDKs for [Python][1] and [Node.js][2]. If your application is written in another language, you can create traces by calling the [API][8] instead.

## Setup

### Jupyter notebooks

To better understand LLM Observability terms and concepts, you can explore the examples in the [LLM Observability Jupyter Notebooks repository][12]. These notebooks provide a hands-on experience, and allow you to apply these concepts in real time.

## Trace an LLM application

To generate an LLM Observability trace, you can run a Python or Node.js script.

### Prerequisites

LLM Observability requires a Datadog API key if you don't have a Datadog Agent running. Find your API key [in Datadog](https://app.datadoghq.com/organization-settings/api-keys).
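If you aren't sure whether an Agent is already running on the host, one quick check is to probe the Agent's default trace intake port (8126). This is a sketch that assumes the Agent's default configuration; the function name is illustrative:

```python
import socket

def agent_listening(host="127.0.0.1", port=8126):
    """Return True if something is accepting TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=0.5):
            return True
    except OSError:
        return False

print("Agent detected:", agent_listening())
```

If nothing is listening, use the API key route described above.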

### Setup

{{< tabs >}}
{{% tab "Python" %}}

1. Install the SDK:

```shell
pip install ddtrace
```


2. Prefix your Python start command with `ddtrace-run`:

```shell
DD_LLMOBS_ENABLED=1 \
DD_LLMOBS_ML_APP=quickstart-app \
DD_API_KEY=<YOUR_DATADOG_API_KEY> \
ddtrace-run <your application command>
```

Replace `<YOUR_DATADOG_API_KEY>` with your Datadog API key.

For more information about required environment variables, see [the SDK documentation][1].
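If you launch your application from a wrapper script rather than directly from the shell, the same variables can be set programmatically. A minimal sketch, where `app.py` and the API key value are placeholders:

```python
import os
import subprocess

# Build the environment for the traced process; the values mirror the
# shell command above. The API key here is a placeholder.
env = dict(os.environ)
env.update({
    "DD_LLMOBS_ENABLED": "1",
    "DD_LLMOBS_ML_APP": "quickstart-app",
    "DD_API_KEY": "<YOUR_DATADOG_API_KEY>",
})

# ddtrace-run wraps your normal Python start command.
cmd = ["ddtrace-run", "python", "app.py"]
# subprocess.run(cmd, env=env)  # uncomment to launch
```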

[1]: /llm_observability/setup/sdk/python/#command-line-setup
{{% /tab %}}

{{% tab "Node.js" %}}
1. Install the SDK:

```shell
npm install dd-trace
```

2. Add `NODE_OPTIONS` to your Node.js start command:
```shell
DD_LLMOBS_ENABLED=1 \
DD_LLMOBS_ML_APP=quickstart-app \
DD_API_KEY=<YOUR_DATADOG_API_KEY> \
NODE_OPTIONS="--import dd-trace/initialize.mjs" <your application command>
```

Replace `<YOUR_DATADOG_API_KEY>` with your Datadog API key.

For more information about required environment variables, see [the SDK documentation][1].

[1]: /llm_observability/setup/sdk/nodejs/#command-line-setup

{{% /tab %}}
{{< /tabs >}}

### View traces

Make requests to your application triggering LLM calls and then view traces in the **Traces** tab [of the **LLM Observability** page][3] in Datadog. If you don't see any traces, make sure you are using a supported library. Otherwise, you may need to instrument your application's LLM calls manually.

{{< img src="llm_observability/quickstart_trace_1.png" alt="An LLM Observability trace displaying a single LLM request" style="width:100%;" >}}

## Example "Hello World" application


See below for a minimal application you can use to begin exploring LLM Observability.



{{< tabs >}}
{{% tab "Python" %}}
1. Install OpenAI with `pip install openai`.

2. Save the following example script as `app.py`:

```python
import os
from openai import OpenAI

oai_client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))
completion = oai_client.chat.completions.create(
model="gpt-4o-mini",
messages=[
{"role": "system", "content": "You are a helpful customer assistant for a furniture store."},
{"role": "user", "content": "I'd like to buy a chair for my living room."},
],
)
```

```python
from ddtrace.llmobs import LLMObs
def handler():
# function body
LLMObs.flush()
```
3. Run the application:

```shell
# Make sure you have the required environment variables listed above
DD_...= \
ddtrace-run python app.py
```
{{% /tab %}}

{{% tab "Node.js" %}}
1. Install OpenAI with `npm install openai`.

2. Save the following example script as `app.js`:

```javascript
const { OpenAI } = require('openai');

const oaiClient = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function main () {
    const completion = await oaiClient.chat.completions.create({
        model: 'gpt-4o-mini',
        messages: [
            { role: 'system', content: 'You are a helpful customer assistant for a furniture store.' },
            { role: 'user', content: 'I\'d like to buy a chair for my living room.' },
        ]
    });
    return completion;
}

main().then(console.log);
```

3. Run the application:

```shell
# Make sure you have the required environment variables listed above
DD_...= \
NODE_OPTIONS="--import dd-trace/initialize.mjs" node app.js
```

{{% /tab %}}
{{< /tabs >}}


## Further Reading

{{< partial name="whats-next/whats-next.html" >}}