docs: update reasoning guides with middleware #4672

Merged on Feb 3, 2025 (5 commits)
53 changes: 48 additions & 5 deletions content/docs/02-guides/04-r1.mdx

```ts
import { fireworks } from '@ai-sdk/fireworks';
import {
  generateText,
  wrapLanguageModel,
  extractReasoningMiddleware,
} from 'ai';

// middleware to extract reasoning tokens
const enhancedModel = wrapLanguageModel({
  model: fireworks('accounts/fireworks/models/deepseek-r1'),
  middleware: extractReasoningMiddleware({ tagName: 'think' }),
});

const { reasoning, text } = await generateText({
  model: enhancedModel,
  prompt: 'Explain quantum entanglement.',
});
```

Or to use Groq's `deepseek-r1-distill-llama-70b` model:

```ts
import { groq } from '@ai-sdk/groq';
import {
  generateText,
  wrapLanguageModel,
  extractReasoningMiddleware,
} from 'ai';

// middleware to extract reasoning tokens
const enhancedModel = wrapLanguageModel({
  model: groq('deepseek-r1-distill-llama-70b'),
  middleware: extractReasoningMiddleware({ tagName: 'think' }),
});

const { reasoning, text } = await generateText({
  model: enhancedModel,
  prompt: 'Explain quantum entanglement.',
});
```

<Note>
  The AI SDK provides a [middleware](/docs/ai-sdk-core/middleware)
  (`extractReasoningMiddleware`) that can be used to extract the reasoning
  tokens from the model's output.
</Note>
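Conceptually, the middleware splits the model's raw completion into the content inside the configured tag (the reasoning) and everything outside it (the answer). A minimal sketch of that idea — note this is an illustration of the behavior, not the SDK's actual implementation, and `splitReasoning` is a hypothetical name:

```ts
// Illustrative only: approximates what extractReasoningMiddleware({ tagName: 'think' })
// does to a raw completion string.
function splitReasoning(
  output: string,
  tagName: string,
): { reasoning: string; text: string } {
  const pattern = new RegExp(`<${tagName}>([\\s\\S]*?)</${tagName}>`, 'g');
  let reasoning = '';
  // Collect tag contents as reasoning; strip them from the visible text.
  const text = output
    .replace(pattern, (_match, inner: string) => {
      reasoning += inner;
      return '';
    })
    .trim();
  return { reasoning: reasoning.trim(), text };
}
```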

### Model Provider Comparison

You can use DeepSeek R1 with the AI SDK through various providers. Here's a comparison of the providers that support DeepSeek R1:

| Provider | Model ID | Reasoning Tokens |
| -------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------- | ------------------- |
| [DeepSeek](/providers/ai-sdk-providers/deepseek) | [`deepseek-reasoner`](https://api-docs.deepseek.com/guides/reasoning_model) | <Check size={18} /> |
| [Fireworks](/providers/ai-sdk-providers/fireworks) | [`accounts/fireworks/models/deepseek-r1`](https://fireworks.ai/models/fireworks/deepseek-r1) | Requires Middleware |
| [Groq](/providers/ai-sdk-providers/groq) | [`deepseek-r1-distill-llama-70b`](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B) | Requires Middleware |
| [Azure](/providers/ai-sdk-providers/azure) | [`DeepSeek-R1`](https://ai.azure.com/explore/models/DeepSeek-R1/version/1/registry/azureml-deepseek#code-samples) | Requires Middleware |
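As the table shows, DeepSeek's first-party provider surfaces reasoning tokens directly, so no middleware is needed there. A brief sketch, assuming the `@ai-sdk/deepseek` package is installed and a DeepSeek API key is configured in the environment:

```ts
import { deepseek } from '@ai-sdk/deepseek';
import { generateText } from 'ai';

// deepseek-reasoner returns reasoning natively; no wrapLanguageModel needed
const { reasoning, text } = await generateText({
  model: deepseek('deepseek-reasoner'),
  prompt: 'Explain quantum entanglement.',
});
```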

### Building Interactive Interfaces

AI SDK Core can be paired with [AI SDK UI](/docs/ai-sdk-ui/overview), another powerful component of the AI SDK, to streamline the process of building chat, completion, and assistant interfaces with popular frameworks like Next.js, Nuxt, SvelteKit, and SolidStart.
17 changes: 17 additions & 0 deletions content/providers/01-ai-sdk-providers/02-azure.mdx

```ts
const model = azure('your-deployment-name');
```

You need to pass your deployment name as the first argument.

### Reasoning Models

Azure exposes the thinking of `DeepSeek-R1` in the generated text using the `<think>` tag.
You can use the `extractReasoningMiddleware` to extract this reasoning and expose it as a `reasoning` property on the result:

```ts
import { azure } from '@ai-sdk/azure';
import { wrapLanguageModel, extractReasoningMiddleware } from 'ai';

const enhancedModel = wrapLanguageModel({
model: azure('your-deepseek-r1-deployment-name'),
middleware: extractReasoningMiddleware({ tagName: 'think' }),
});
```

You can then use that enhanced model in functions like `generateText` and `streamText`.
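For instance, a brief streaming sketch using the wrapped model from above (assuming a valid Azure deployment and credentials):

```ts
import { streamText } from 'ai';

const result = streamText({
  model: enhancedModel,
  prompt: 'Explain quantum entanglement.',
});

// Stream the answer text as it arrives.
for await (const textPart of result.textStream) {
  process.stdout.write(textPart);
}
```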

### Example

You can use OpenAI language models to generate text with the `generateText` function:
17 changes: 17 additions & 0 deletions content/providers/01-ai-sdk-providers/26-fireworks.mdx

The first argument is the model id, e.g. `accounts/fireworks/models/firefunction-v1`:

```ts
const model = fireworks('accounts/fireworks/models/firefunction-v1');
```

### Reasoning Models

Fireworks exposes the thinking of `deepseek-r1` in the generated text using the `<think>` tag.
You can use the `extractReasoningMiddleware` to extract this reasoning and expose it as a `reasoning` property on the result:

```ts
import { fireworks } from '@ai-sdk/fireworks';
import { wrapLanguageModel, extractReasoningMiddleware } from 'ai';

const enhancedModel = wrapLanguageModel({
model: fireworks('accounts/fireworks/models/deepseek-r1'),
middleware: extractReasoningMiddleware({ tagName: 'think' }),
});
```

You can then use that enhanced model in functions like `generateText` and `streamText`.

### Example

You can use Fireworks language models to generate text with the `generateText` function: