Conversation
Set up package.json, tsconfig, Vite, and Vitest configs for the new Groq provider.
Introduce createGroqClient, getGroqApiKeyFromEnv, and generateId helpers. Add transformNullsToUndefined to normalize Groq responses. Add makeGroqStructuredOutputCompatible to adapt JSON schemas for Groq structured output.
…ation, and AG-UI streaming events including tool calls.
…llation, usage, supported models, and features.
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings. Use the following commands to manage reviews:
📝 Walkthrough

Adds a new …

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor Client
    participant Adapter as GroqTextAdapter
    participant Converter as Message Converter
    participant GroqSDK as Groq SDK
    participant Processor as Stream Processor
    Client->>Adapter: chatStream(TextOptions)
    activate Adapter
    Adapter->>Converter: mapTextOptionsToGroq / convertMessageToGroq
    Converter-->>Adapter: Groq request payload
    Adapter->>GroqSDK: chat.completions.create(streaming=true)
    GroqSDK-->>Adapter: AsyncIterable[chunk]
    Adapter->>Processor: processGroqStreamChunks(chunk)
    loop per chunk
        Processor->>Processor: accumulate text, detect tool calls
        Processor-->>Adapter: emit StreamChunk events (RUN_STARTED, TEXT_MESSAGE_*, TOOL_CALL_*, RUN_FINISHED)
    end
    Adapter-->>Client: yield StreamChunk events
    deactivate Adapter
```

```mermaid
sequenceDiagram
    actor Client
    participant Adapter as GroqTextAdapter
    participant SchemaConv as Schema Converter
    participant GroqSDK as Groq SDK
    participant Parser as JSON Parser
    Client->>Adapter: structuredOutput(schema)
    activate Adapter
    Adapter->>SchemaConv: makeGroqStructuredOutputCompatible(schema)
    SchemaConv-->>Adapter: Groq-compatible schema
    Adapter->>GroqSDK: chat.completions.create(response_format=json_schema)
    GroqSDK-->>Adapter: response with JSON text
    Adapter->>Parser: JSON.parse + transformNullsToUndefined
    Parser-->>Adapter: parsed data
    Adapter-->>Client: StructuredOutputResult { data, rawText }
    deactivate Adapter
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs
Suggested reviewers
🚥 Pre-merge checks: ✅ 2 | ❌ 1

❌ Failed checks (1 inconclusive)
✅ Passed checks (2 passed)
Actionable comments posted: 8
🧹 Nitpick comments (9)
packages/typescript/ai-groq/src/model-meta.ts (1)
236-262: Minor: Inconsistent numeric separator in `max_completion_tokens`.

Line 239 uses `65536` while other model definitions use underscore separators (e.g., `65_536` on Line 209, `32_768` on Line 40). Use `65_536` for consistency.

Proposed fix:

```diff
- max_completion_tokens: 65536,
+ max_completion_tokens: 65_536,
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-groq/src/model-meta.ts` around lines 236 - 262, The numeric separator in the GPT_OSS_20B model definition is inconsistent: change the value of max_completion_tokens in the GPT_OSS_20B object (symbol: GPT_OSS_20B, property: max_completion_tokens) from 65536 to use an underscore for readability (65_536) to match the project's numeric style used elsewhere.

packages/typescript/ai-groq/src/utils/schema-converter.ts (1)
12-33: `transformNullsToUndefined` silently drops keys with null values rather than setting them to `undefined`.

Lines 25–27 skip keys whose transformed value is `undefined`, effectively removing them from the output object. The function name and docstring suggest null→undefined conversion, but the actual behavior is null→key removal. If downstream consumers check `'key' in obj` or iterate `Object.keys()`, they'll get different results than expected.

If key removal is intentional (which seems likely for Zod `.optional()` compatibility), consider clarifying the name/docs to reflect this. If consumers need the key present with an `undefined` value, the `if` guard should be removed.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-groq/src/utils/schema-converter.ts` around lines 12 - 33, The function transformNullsToUndefined currently drops object keys whose values become undefined (due to the `if (transformed !== undefined)` guard) instead of assigning undefined — update transformNullsToUndefined so it preserves keys by removing that guard and always assigning result[key] = transformed (so null -> undefined but key remains), or if key removal was intended instead rename the function and docstring to reflect "removeNullKeys" (or similar) and keep the guard; adjust references/tests accordingly to match the chosen behavior.

packages/typescript/ai-groq/package.json (2)
41-51: All dependency versions exist; some are outdated.

All specified versions are valid published releases. However, `@vitest/coverage-v8` is pinned at `4.0.14` while the latest is `4.0.18`, and `zod` could be updated from the `^4.0.0` baseline to its latest `4.3.6` within the same major version. The exact pin on `@vitest/coverage-v8` is intentional and standard for devDependencies. Consider updating these to their latest available versions for improved stability and bug fixes.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-groq/package.json` around lines 41 - 51, Update the pinned devDependency and peerDependency versions in package.json: bump "@vitest/coverage-v8" in devDependencies from "4.0.14" to "4.0.18" and relax/update the "zod" entry in peerDependencies from "^4.0.0" to "^4.3.6" (keeping the same major range). Edit the devDependencies["@vitest/coverage-v8"] and peerDependencies["zod"] entries accordingly and run your package manager (install) to refresh lockfiles.
15-20: Add `/adapters` subpath export for tree-shaking.

The package.json should export adapters via a subpath to enable proper tree-shaking. The adapter implementations exist in `src/adapters/` and are re-exported from the main index, but adding a package.json subpath export enables bundlers to properly tree-shake unused adapters.

Proposed fix:

```diff
   "exports": {
     ".": {
       "types": "./dist/esm/index.d.ts",
       "import": "./dist/esm/index.js"
+    },
+    "./adapters": {
+      "types": "./dist/esm/adapters/index.d.ts",
+      "import": "./dist/esm/adapters/index.js"
     }
   },
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-groq/package.json` around lines 15 - 20, Update the package.json "exports" to add a subpath for adapters so bundlers can tree-shake them: under the existing "exports" object add an entry for "./adapters/*" mapping to the built ESM adapter files (e.g. "import": "./dist/esm/adapters/*.js", "types": "./dist/esm/adapters/*.d.ts") and optionally a "./adapters" entry mapping to "./dist/esm/adapters/index.js" and its types; this ensures the runtime import paths for modules in src/adapters/ (re-exported by the main index) resolve to discrete files in dist for tree-shaking.

packages/typescript/ai-groq/vitest.config.ts (1)
1-22: Duplicate test configuration with `vite.config.ts`.

This file duplicates the test configuration already present in `vite.config.ts`. When both files exist, `vitest.config.ts` takes precedence, effectively overriding the test config in `vite.config.ts`. Any future test configuration changes must be applied in both places to keep them in sync.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-groq/vitest.config.ts` around lines 1 - 22, The vitest config in this file duplicates the test configuration (the export default defineConfig({ test: { ... } })) already defined in vite.config.ts; remove the duplication by either deleting this vitest.config.ts entirely or changing it to re-export/forward the test config from the existing vite.config export so only one source of truth exists (i.e., ensure the exported defineConfig uses the test config from vite.config rather than redefining test settings here).

packages/typescript/ai-groq/src/tools/tool-converter.ts (1)
9-15: Nit: simplify the map callback.

The wrapper function is redundant around the single call.

♻️ Proposed simplification

```diff
 export function convertToolsToProviderFormat(
   tools: Array<Tool>,
 ): Array<FunctionTool> {
-  return tools.map((tool) => {
-    return convertFunctionToolToAdapterFormat(tool)
-  })
+  return tools.map(convertFunctionToolToAdapterFormat)
 }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-groq/src/tools/tool-converter.ts` around lines 9 - 15, The map callback in convertToolsToProviderFormat is redundant; replace the inline arrow wrapper with a direct reference to the converter function. Update convertToolsToProviderFormat to return tools.map(convertFunctionToolToAdapterFormat) so it directly maps Array<Tool> to Array<FunctionTool> using convertFunctionToolToAdapterFormat.

packages/typescript/ai-groq/src/adapters/text.ts (2)
503-506: `parts as any` type escape hides type mismatches.

The `as any` cast on the `content` field bypasses type checking. If the Groq SDK's `ChatCompletionMessageParam` type for user messages actually expects a different shape, this won't be caught at compile time. Consider defining a more precise type or using the SDK's content part types directly.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-groq/src/adapters/text.ts` around lines 503 - 506, The return currently masks type errors by using "parts as any" for the content field; instead remove the any cast and convert or type-assert parts to the SDK's expected shape (e.g., import and use ChatCompletionMessageParam or its content member) or map/transform the parts array into the exact content structure expected by the Groq SDK. Locate the return in text.ts where you build the object with role: 'user' and variable parts, then either cast to the precise type (e.g., content: parts as ChatCompletionMessageParam['content']) or explicitly map parts into the correct shape before returning to preserve compile-time type safety.
70-123: Stream errors in `processGroqStreamChunks` are caught but not re-yielded to the outer generator.

The `catch` block inside `processGroqStreamChunks` (Lines 367-381) yields a `RUN_ERROR` event and then silently completes. However, the outer `chatStream` method (Lines 94-122) also has a `catch` that would only trigger if `client.chat.completions.create()` itself throws (pre-stream errors). Mid-stream errors are correctly handled by the inner catch. This is fine structurally.

However, note that if `processGroqStreamChunks` yields `RUN_ERROR` and then the stream finishes, the outer `yield*` will complete normally — so the outer `catch` won't fire. This means there are two separate `RUN_ERROR`-emitting paths that could theoretically both fire if the SDK throws during creation AND during iteration. Consider consolidating error handling.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-groq/src/adapters/text.ts` around lines 70 - 123, The issue: mid-stream errors emitted as RUN_ERROR inside processGroqStreamChunks are not propagated to chatStream, causing duplicated or inconsistent error paths; fix by, after processGroqStreamChunks catches and yields a RUN_ERROR (using aguiState.runId etc.), re-throw the error (or throw a new Error) so the outer chatStream's try/catch (around client.chat.completions.create and the yield*) can handle logging/unified handling; update processGroqStreamChunks to rethrow the caught error after yielding RUN_ERROR and ensure chatStream's catch remains ready to produce the same RUN_ERROR only for pre-stream creation failures (use symbols processGroqStreamChunks, chatStream, client.chat.completions.create, RUN_ERROR, and aguiState to locate the changes).

packages/typescript/ai-groq/tests/groq-adapter.test.ts (1)
51-54: Consider adding `parameters` to the weather tool fixture.

The `weatherTool` only has `name` and `description`. While this is sufficient for the current tests, it means the tool conversion path that handles `parameters` / JSON Schema is untested. Consider adding a `parameters` field to improve coverage of the tool conversion logic.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-groq/tests/groq-adapter.test.ts` around lines 51 - 54, Add a JSON Schema "parameters" field to the weatherTool fixture so the tool-conversion path that handles parameters is exercised; update the weatherTool object (named weatherTool, type Tool) to include a parameters object with "type":"object", a "properties" map (e.g., location: {type: "string", description: "City or coordinates"}, units: {type: "string", enum: ["metric","imperial"]}) and a "required" array (e.g., ["location"]) to cover required/property handling in the conversion logic; keep the rest of the fixture the same so existing tests can still run.
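A possible expanded fixture along the lines of the suggestion above; the tool name, description, and the exact `Tool` shape are illustrative assumptions, not the actual test fixture:

```typescript
// Hypothetical fixture: name/description are placeholders; only the added
// `parameters` JSON Schema follows the review's suggestion.
const weatherTool = {
  name: 'get_weather',
  description: 'Look up current weather conditions',
  parameters: {
    type: 'object',
    properties: {
      location: { type: 'string', description: 'City or coordinates' },
      units: { type: 'string', enum: ['metric', 'imperial'] },
    },
    required: ['location'],
  },
}
```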
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/typescript/ai-groq/src/adapters/text.ts`:
- Around line 308-338: The computedFinishReason currently collapses every
non-'tool_calls' finish reason into 'stop'; update the logic in the block that
defines computedFinishReason (referencing choice.finish_reason and
toolCallsInProgress) so that if the finish reason is 'tool_calls' or
toolCallsInProgress.size > 0 it yields 'tool_calls', otherwise it preserves the
original choice.finish_reason (falling back to 'stop' only if
choice.finish_reason is undefined). Implement this by replacing the ternary that
currently returns 'stop' with one that returns choice.finish_reason || 'stop'.
- Around line 118-121: Remove the debug console.* statements that leak internal
details and replace them with calls to the project logger or remove entirely;
specifically, in the chatStream error handling block (where the current
console.error('>>> chatStream: ...') calls exist) and in the structuredOutput
and processGroqStreamChunks functions, replace console.error/console.log debug
artifacts with a configurable logger (e.g., use an injected logger instance or a
centralized logger module) or remove the statements, ensuring sensitive error
objects are not printed to stdout/stderr in production and that any retained
logs use appropriate log levels and sanitized messages.
- Around line 137-185: Remove the redundant outer try-catch in structuredOutput
so the inner JSON.parse catch can propagate its descriptive error; specifically,
delete the outer "try { ... } catch (error)" block (and the console.error debug
logs) around the client.chat.completions.create call and subsequent parsing,
leaving the inner try/catch that throws a parsing Error intact; keep using
makeGroqStructuredOutputCompatible(outputSchema, outputSchema.required || [])
as-is since that helper populates required properties itself, and continue to
call transformNullsToUndefined on the parsed result before returning.
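The finish-reason fix described in the first text.ts item above can be sketched in isolation; the `Choice` shape and the helper name are assumptions for illustration, not the adapter's actual code:

```typescript
// Sketch: preserve the provider's finish reason instead of collapsing every
// non-'tool_calls' value to 'stop'; fall back to 'stop' only when absent.
type Choice = { finish_reason?: string }

function computeFinishReason(
  choice: Choice,
  toolCallsInProgress: Map<string, unknown>,
): string {
  return choice.finish_reason === 'tool_calls' || toolCallsInProgress.size > 0
    ? 'tool_calls'
    : choice.finish_reason || 'stop'
}
```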
In `@packages/typescript/ai-groq/src/message-types.ts`:
- Around line 68-73: The ChatCompletionNamedToolChoice interface uses a
capitalized field name `Function` which doesn't match the Groq API (expects
`function`); update the `ChatCompletionNamedToolChoice` interface to rename the
property from `Function` to lowercase `function` so requests built by any code
using this type (e.g., when populating `tool_choice`) serialize with the correct
field name and fixed payloads to the Groq Chat Completions API.
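A minimal sketch of the corrected interface; the `type` literal and nested shape follow the prompt above, and the key point is that the property serializes as lowercase `function`:

```typescript
// Corrected shape: lowercase `function`, matching the Groq Chat Completions
// API's tool_choice payload (exact field set is assumed from the prompt).
interface ChatCompletionNamedToolChoice {
  type: 'function'
  function: { name: string }
}

const toolChoice: ChatCompletionNamedToolChoice = {
  type: 'function',
  function: { name: 'get_weather' },
}

// Serializes with the lowercase key the API expects
const payload = JSON.stringify(toolChoice)
```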
In `@packages/typescript/ai-groq/src/model-meta.ts`:
- Around line 308-321: GROQ_CHAT_MODELS is currently inferred as string[] so the
derived GroqChatModels union collapses to string; fix by making GROQ_CHAT_MODELS
a readonly literal tuple (append as const to the export) so typeof
GROQ_CHAT_MODELS[number] yields the union of model name literals, then ensure
any dependent type uses typeof GROQ_CHAT_MODELS[number] (check GroqChatModels,
ResolveProviderOptions, ResolveInputModalities,
GroqChatModelProviderOptionsByName) to pick up the new literal union.
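The effect of the `as const` fix can be illustrated with a trimmed-down model list (the two names shown are examples, not the full `GROQ_CHAT_MODELS` array):

```typescript
// Without `as const` this widens to string[], and the derived union
// collapses to string; with it, each element stays a literal type.
const GROQ_CHAT_MODELS = [
  'llama-3.1-8b-instant',
  'llama-3.3-70b-versatile',
] as const

type GroqChatModels = (typeof GROQ_CHAT_MODELS)[number]

// Only the listed literals are assignable now:
const model: GroqChatModels = 'llama-3.3-70b-versatile'
```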
In `@packages/typescript/ai-groq/src/utils/client.ts`:
- Around line 40-42: The generateId function can produce empty or very short
suffixes because Math.random().toString(36).substring(7) may return an empty
string; update generateId to produce a consistent, collision-resistant suffix by
using crypto.randomUUID() when available (e.g., return
`${prefix}-${Date.now()}-${crypto.randomUUID()}`) and falling back to a
fixed-length base36 fragment such as Math.random().toString(36).substring(2,10)
(e.g., `${prefix}-${Date.now()}-${Math.random().toString(36).substring(2,10)}`)
to ensure non-empty, consistent-length IDs; modify the generateId function
accordingly and handle environments lacking crypto.randomUUID() with the
fallback.
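A sketch of the hardened `generateId` following the prompt above; the exact ID format is the prompt's suggestion, not confirmed source:

```typescript
// Prefer crypto.randomUUID() when available; otherwise use a fixed-length
// base36 fragment so the suffix is never empty or unusually short.
function generateId(prefix: string): string {
  const cryptoObj = (globalThis as { crypto?: { randomUUID?: () => string } }).crypto
  const suffix =
    cryptoObj && cryptoObj.randomUUID
      ? cryptoObj.randomUUID()
      : Math.random().toString(36).substring(2, 10)
  return `${prefix}-${Date.now()}-${suffix}`
}
```

The original `substring(7)` could return an empty string when `Math.random()` produced a short base36 representation; slicing a fixed window `(2, 10)` avoids that.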
In `@packages/typescript/ai-groq/src/utils/schema-converter.ts`:
- Around line 57-86: The optional object/array branch in
makeGroqStructuredOutputCompatible currently recurses into nested object/array
types but skips adding "null" because the nullable handling is in an else-if;
update makeGroqStructuredOutputCompatible so that after handling the object
(prop.type === 'object') and array (prop.type === 'array') branches you also
apply the wasOptional null-union logic to prop (i.e., if wasOptional and
prop.type is a string replace with [prop.type, 'null'], or if an array and
doesn't include 'null' append 'null') — ensure you reference
properties[propName], prop.items for arrays, and preserve existing spread/merged
fields when adding the null union.
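The null-union step described above can be sketched on its own; the simplified `JsonSchema` shape and the helper name are assumptions, not the converter's actual internals:

```typescript
// After recursing into object/array schemas, an optional property's type
// also gets 'null' added, so structured output may emit null for it.
type JsonSchema = {
  type?: string | Array<string>
  properties?: Record<string, JsonSchema>
  items?: JsonSchema
}

function addNullUnion(prop: JsonSchema, wasOptional: boolean): JsonSchema {
  if (!wasOptional || prop.type === undefined) return prop
  if (typeof prop.type === 'string') {
    return { ...prop, type: [prop.type, 'null'] }
  }
  return prop.type.includes('null')
    ? prop
    : { ...prop, type: [...prop.type, 'null'] }
}
```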
In `@packages/typescript/ai-groq/tests/groq-adapter.test.ts`:
- Around line 160-164: The current tests use conditional type-guard blocks like
if (chunks[0]?.type === 'RUN_STARTED') { ... } which silently skip assertions if
the type is wrong; change these to assert the type first (e.g.,
expect(chunks[0]?.type).toBe('RUN_STARTED')) and then run the property
assertions unguarded, or replace the if with a failing pattern (e.g., throw or
expect(...).toBe(...)) so the test fails when the type is not as expected;
update all occurrences that inspect chunks[n]?.type (examples: checks for
'RUN_STARTED', 'SOME_OTHER_EVENT', etc.) so the assertions on runId, model,
text, etc. are executed only after a definitive expect of chunks[index].type.
Ensure you update each guarded block at the noted occurrences so tests fail
loudly on regressions.
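The suggested assert-first pattern can be illustrated with a plain assertion helper standing in for vitest's `expect`; the chunk shapes are simplified stand-ins:

```typescript
// Guarded pattern (bad): if the type is wrong, the inner assertions are
// silently skipped. Assert the discriminant first so the test fails loudly.
type Chunk =
  | { type: 'RUN_STARTED'; runId: string; model: string }
  | { type: 'TEXT_MESSAGE_CONTENT'; text: string }

function assertEqual<T>(actual: T, expected: T): void {
  if (actual !== expected) {
    throw new Error(`expected ${String(expected)}, got ${String(actual)}`)
  }
}

const chunks: Array<Chunk> = [
  { type: 'RUN_STARTED', runId: 'run-1', model: 'llama-3.3-70b-versatile' },
]

// Preferred: definitive assertion on the type, then unguarded checks
assertEqual(chunks[0]?.type, 'RUN_STARTED')
const first = chunks[0] as Extract<Chunk, { type: 'RUN_STARTED' }>
assertEqual(first.runId, 'run-1')
```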
---
AlemTuzlak
left a comment
Some questions; also, could we use the SDK types to type some of the provider options, to make them more accurate?
```typescript
const result = await generate({
  adapter,
  model: 'llama-3.3-70b-versatile',
```
…nContentPart` in Groq text adapter.
Actionable comments posted: 3
🧹 Nitpick comments (1)
packages/typescript/ai-groq/src/model-meta.ts (1)
331-344: `GroqModelInputModalitiesByName` could be derived from the model constants to avoid manual maintenance.

The map manually duplicates every model's `supports.input` type. Adding a new model requires updating this map separately from `GROQ_CHAT_MODELS` and the model constant. A mapped type approach would keep this in sync automatically:

♻️ Proposed optional refactor

```diff
-export type GroqModelInputModalitiesByName = {
-  [LLAMA_3_1_8B_INSTANT.name]: typeof LLAMA_3_1_8B_INSTANT.supports.input
-  [LLAMA_3_3_70B_VERSATILE.name]: typeof LLAMA_3_3_70B_VERSATILE.supports.input
-  [LLAMA_4_MAVERICK_17B_128E_INSTRUCT.name]: typeof LLAMA_4_MAVERICK_17B_128E_INSTRUCT.supports.input
-  [LLAMA_4_SCOUT_17B_16E_INSTRUCT.name]: typeof LLAMA_4_SCOUT_17B_16E_INSTRUCT.supports.input
-  [LLAMA_GUARD_4_12B.name]: typeof LLAMA_GUARD_4_12B.supports.input
-  [LLAMA_PROMPT_GUARD_2_86M.name]: typeof LLAMA_PROMPT_GUARD_2_86M.supports.input
-  [LLAMA_PROMPT_GUARD_2_22M.name]: typeof LLAMA_PROMPT_GUARD_2_22M.supports.input
-  [GPT_OSS_20B.name]: typeof GPT_OSS_20B.supports.input
-  [GPT_OSS_120B.name]: typeof GPT_OSS_120B.supports.input
-  [GPT_OSS_SAFEGUARD_20B.name]: typeof GPT_OSS_SAFEGUARD_20B.supports.input
-  [KIMI_K2_INSTRUCT_0905.name]: typeof KIMI_K2_INSTRUCT_0905.supports.input
-  [QWEN3_32B.name]: typeof QWEN3_32B.supports.input
-}
```

Define a lookup of all model objects and derive the map:

```typescript
const GROQ_MODEL_DEFS = [
  LLAMA_3_1_8B_INSTANT,
  LLAMA_3_3_70B_VERSATILE,
  LLAMA_4_MAVERICK_17B_128E_INSTRUCT,
  LLAMA_4_SCOUT_17B_16E_INSTRUCT,
  LLAMA_GUARD_4_12B,
  LLAMA_PROMPT_GUARD_2_86M,
  LLAMA_PROMPT_GUARD_2_22M,
  GPT_OSS_20B,
  GPT_OSS_120B,
  GPT_OSS_SAFEGUARD_20B,
  KIMI_K2_INSTRUCT_0905,
  QWEN3_32B,
] as const

export type GroqModelInputModalitiesByName = {
  [M in (typeof GROQ_MODEL_DEFS)[number] as M['name']]: M['supports']['input']
}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-groq/src/model-meta.ts` around lines 331 - 344, The GroqModelInputModalitiesByName type is manually duplicating each model's supports.input and will drift when models are added; fix by creating a single GROQ_MODEL_DEFS const array containing the model constants (e.g. LLAMA_3_1_8B_INSTANT, LLAMA_3_3_70B_VERSATILE, LLAMA_4_MAVERICK_17B_128E_INSTRUCT, etc.) and derive GroqModelInputModalitiesByName via a mapped type over (typeof GROQ_MODEL_DEFS)[number] using M['name'] as the key and M['supports']['input'] as the value so new model constants automatically flow into the type.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/typescript/ai-groq/src/adapters/text.ts`:
- Around line 314-332: The loop over toolCallsInProgress is emitting
TOOL_CALL_END events for entries that never had a start (e.g., toolCall.started
=== false), producing orphaned events; update the loop in the iterator that
yields TOOL_CALL_END to skip any toolCall where toolCall.started is falsy or
missing required identifiers (toolCall.id or toolCall.name) — only parse
toolCall.arguments and yield the TOOL_CALL_END object when toolCall.started ===
true and both toolCall.id and toolCall.name are present, otherwise continue to
the next entry.
- Around line 442-451: The current mapping for tool messages uses
message.toolCallId || '' which produces an invalid empty tool_call_id; update
the handler that checks message.role === 'tool' to require a non-empty
message.toolCallId (the symbol toolCallId on the ModelMessage) and either throw
a clear error or skip/convert the message if toolCallId is missing, and then set
tool_call_id directly to message.toolCallId (no fallback). Also verify the
ModelMessage type for role: 'tool' to ensure toolCallId is not nullable/optional
and adjust types if needed so the compiler enforces presence.
In `@packages/typescript/ai-groq/src/model-meta.ts`:
- Around line 77-95: The LLAMA_4_SCOUT_17B_16E_INSTRUCT ModelMeta incorrectly
lists only text inputs and omits the vision feature; update the supports block
on LLAMA_4_SCOUT_17B_16E_INSTRUCT to include input: ['text', 'image'] and add
'vision' to features (matching the meta-llama/llama-4-maverick-17b-128e-instruct
pattern) so ResolveInputModalities will allow image inputs for this
vision-capable model.
---
Duplicate comments:
In `@packages/typescript/ai-groq/src/adapters/text.ts`:
- Around line 149-185: Remove the redundant outer try-catch surrounding the call
to this.client.chat.completions.create in the function that returns
structuredOutput; the inner try-catch that parses JSON (which throws a
descriptive Error when JSON.parse fails) is sufficient, so delete the outer try
{ ... } catch (error) { ... } block and let errors from the create call or the
inner parser propagate naturally; keep the requestParams call to
this.client.chat.completions.create, the JSON.parse logic that sets parsed, the
transformNullsToUndefined(parsed) step, and the returned object { data:
transformed, rawText } intact.
- Around line 119-122: Remove the leftover debug console statements: replace the
console.error calls in the chatStream response creation block (the ones printing
'>>> chatStream: Fatal error during response creation <<<', error.message,
error.stack, and full error) and the similar '>>>' prints in the
structuredOutput handling, and the console.log inside processGroqStreamChunks
with the project's standardized logger or proper error handling; locate these by
the functions/methods named chatStream, structuredOutput, and
processGroqStreamChunks in text.ts and either remove the debug prints or call
the centralized logger (e.g., processLogger.error/processLogger.debug) and
include contextual messages and the error object rather than ad-hoc '>>>'
markers.
In `@packages/typescript/ai-groq/src/model-meta.ts`:
- Around line 308-321: The previous type-collapse issue is fixed by adding the
as const assertion to GROQ_CHAT_MODELS so GroqChatModels now produces a proper
literal union; remove the duplicate review marker and/or the stale
duplicate_comment note from the PR so it doesn't block merging and ensure the
constant array (GROQ_CHAT_MODELS) and any dependent type aliases (e.g.,
GroqChatModels) remain unchanged.
---
Nitpick comments:
In `@packages/typescript/ai-groq/src/model-meta.ts`:
- Around line 331-344: The GroqModelInputModalitiesByName type is manually
duplicating each model's supports.input and will drift when models are added;
fix by creating a single GROQ_MODEL_DEFS const array containing the model
constants (e.g. LLAMA_3_1_8B_INSTANT, LLAMA_3_3_70B_VERSATILE,
LLAMA_4_MAVERICK_17B_128E_INSTRUCT, etc.) and derive
GroqModelInputModalitiesByName via a mapped type over (typeof
GROQ_MODEL_DEFS)[number] using M['name'] as the key and M['supports']['input']
as the value so new model constants automatically flow into the type.
Actionable comments posted: 2
🧹 Nitpick comments (1)
packages/typescript/ai-groq/src/model-meta.ts (1)
357-360: `ResolveProviderOptions` is a vacuous conditional — both branches return `GroqTextProviderOptions`.

Since `GroqChatModelProviderOptionsByName` maps every known model to `GroqTextProviderOptions`, the `extends keyof` branch and the fallback branch are identical today. The utility is correct but currently a no-op. Consider either adding a comment explaining the forward-compatibility intent, or simplifying to `type ResolveProviderOptions<TModel extends string> = GroqTextProviderOptions` until models diverge in their option types.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-groq/src/model-meta.ts` around lines 357 - 360, ResolveProviderOptions<TModel> is currently vacuous because GroqChatModelProviderOptionsByName maps every model to GroqTextProviderOptions; either simplify the alias to "type ResolveProviderOptions<TModel extends string> = GroqTextProviderOptions" or keep the generic form but add a clarifying comment above ResolveProviderOptions explaining it is forward-compatible and intentionally resolves to GroqTextProviderOptions today (reference ResolveProviderOptions, GroqChatModelProviderOptionsByName, and GroqTextProviderOptions when making the change).
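The two shapes discussed above can be sketched side by side. This is illustrative only: `GroqTextProviderOptions` is reduced to two fields, and the per-model map is trimmed to one real model id.

```typescript
// Simplified stand-in for the real options type in the package.
export interface GroqTextProviderOptions {
  temperature?: number
  maxCompletionTokens?: number
}

// Today every known model maps to the same options type...
type GroqChatModelProviderOptionsByName = {
  'llama-3.1-8b-instant': GroqTextProviderOptions
}

// ...so both branches of the conditional resolve to GroqTextProviderOptions.
export type ResolveProviderOptions<TModel extends string> =
  TModel extends keyof GroqChatModelProviderOptionsByName
    ? GroqChatModelProviderOptionsByName[TModel]
    : GroqTextProviderOptions

// Runtime witness that a known and an unknown model accept the same shape.
const known: ResolveProviderOptions<'llama-3.1-8b-instant'> = { temperature: 0.5 }
const unknown: ResolveProviderOptions<'future-model'> = { temperature: 0.5 }
export const sameShape = JSON.stringify(known) === JSON.stringify(unknown)
```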
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/typescript/ai-groq/src/model-meta.ts`:
- Around line 236-262: Update the GPT_OSS_20B ModelMeta: add 'tools' to the
supports.features array of GPT_OSS_20B and normalize the numeric literal for
max_completion_tokens from 65536 to 65_536 so it matches GPT_OSS_SAFEGUARD_20B
and GPT_OSS_120B; modify the object named GPT_OSS_20B (properties
supports.features and max_completion_tokens) accordingly.
- Around line 277-282: The model metadata for KIMI_K2_INSTRUCT_0905 currently
lists features without 'tools', which blocks tool/function calling; update the
supports.features array for the KIMI_K2_INSTRUCT_0905 model entry to include the
string 'tools' (i.e., ['streaming', 'json_object', 'json_schema', 'tools']) so
adapter capability checks allow tool_calls; locate the KIMI_K2_INSTRUCT_0905
object in model-meta.ts and modify its supports.features accordingly.
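Both inline fixes amount to small edits to the model constants. A minimal sketch, assuming a simplified `ModelMeta` shape (the real entries in `model-meta.ts` carry more fields, and the model id strings here are assumptions):

```typescript
// Illustrative corrected entries; not the full ModelMeta shape.
export const GPT_OSS_20B = {
  name: 'openai/gpt-oss-20b',
  max_completion_tokens: 65_536, // underscore literal, matching sibling entries
  supports: {
    // 'tools' added so capability checks permit tool calls
    features: ['streaming', 'json_object', 'json_schema', 'tools'],
  },
} as const

export const KIMI_K2_INSTRUCT_0905 = {
  name: 'moonshotai/kimi-k2-instruct-0905',
  supports: {
    // 'tools' added per the review so tool_calls are not blocked
    features: ['streaming', 'json_object', 'json_schema', 'tools'],
  },
} as const
```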
---
Duplicate comments:
In `@packages/typescript/ai-groq/src/model-meta.ts`:
- Around line 89-94: Update the supports.features for the
LLAMA_4_SCOUT_17B_16E_INSTRUCT model to include 'vision' so adapters detect its
vision capability; locate the LLAMA_4_SCOUT_17B_16E_INSTRUCT entry in
model-meta.ts and add 'vision' to the supports.features array (making it
['streaming','tools','json_object','vision'] or match the Maverick entry
pattern) while keeping the existing input/output/endpoints values unchanged.
---
Nitpick comments:
In `@packages/typescript/ai-groq/src/model-meta.ts`:
- Around line 357-360: ResolveProviderOptions<TModel> is currently vacuous
because GroqChatModelProviderOptionsByName maps every model to
GroqTextProviderOptions; either simplify the alias to "type
ResolveProviderOptions<TModel extends string> = GroqTextProviderOptions" or keep
the generic form but add a clarifying comment above ResolveProviderOptions
explaining it is forward-compatible and intentionally resolves to
GroqTextProviderOptions today (reference ResolveProviderOptions,
GroqChatModelProviderOptionsByName, and GroqTextProviderOptions when making the
change).
dhamivibez
left a comment
I've removed the casts in the text adapter
Actionable comments posted: 1
🧹 Nitpick comments (1)
packages/typescript/ai-groq/src/model-meta.ts (1)
331-344: `GroqModelInputModalitiesByName` is a manually-maintained map that can silently drift.

If a future model entry is appended to `GROQ_CHAT_MODELS` but accidentally omitted from this type, `ResolveInputModalities` silently falls back to `readonly ['text']`, a particularly dangerous miss for vision-capable models. A derived type eliminates the risk:

♻️ Proposed refactor

```diff
-export type GroqModelInputModalitiesByName = {
-  [LLAMA_3_1_8B_INSTANT.name]: typeof LLAMA_3_1_8B_INSTANT.supports.input
-  [LLAMA_3_3_70B_VERSATILE.name]: typeof LLAMA_3_3_70B_VERSATILE.supports.input
-  [LLAMA_4_MAVERICK_17B_128E_INSTRUCT.name]: typeof LLAMA_4_MAVERICK_17B_128E_INSTRUCT.supports.input
-  [LLAMA_4_SCOUT_17B_16E_INSTRUCT.name]: typeof LLAMA_4_SCOUT_17B_16E_INSTRUCT.supports.input
-  [LLAMA_GUARD_4_12B.name]: typeof LLAMA_GUARD_4_12B.supports.input
-  [LLAMA_PROMPT_GUARD_2_86M.name]: typeof LLAMA_PROMPT_GUARD_2_86M.supports.input
-  [LLAMA_PROMPT_GUARD_2_22M.name]: typeof LLAMA_PROMPT_GUARD_2_22M.supports.input
-  [GPT_OSS_20B.name]: typeof GPT_OSS_20B.supports.input
-  [GPT_OSS_120B.name]: typeof GPT_OSS_120B.supports.input
-  [GPT_OSS_SAFEGUARD_20B.name]: typeof GPT_OSS_SAFEGUARD_20B.supports.input
-  [KIMI_K2_INSTRUCT_0905.name]: typeof KIMI_K2_INSTRUCT_0905.supports.input
-  [QWEN3_32B.name]: typeof QWEN3_32B.supports.input
-}
```

Introduce a single source of truth tuple and derive from it:

```ts
const ALL_GROQ_CHAT_MODELS = [
  LLAMA_3_1_8B_INSTANT,
  LLAMA_3_3_70B_VERSATILE,
  LLAMA_4_MAVERICK_17B_128E_INSTRUCT,
  LLAMA_4_SCOUT_17B_16E_INSTRUCT,
  LLAMA_GUARD_4_12B,
  LLAMA_PROMPT_GUARD_2_86M,
  LLAMA_PROMPT_GUARD_2_22M,
  GPT_OSS_20B,
  GPT_OSS_120B,
  GPT_OSS_SAFEGUARD_20B,
  KIMI_K2_INSTRUCT_0905,
  QWEN3_32B,
] as const

export const GROQ_CHAT_MODELS = ALL_GROQ_CHAT_MODELS.map((m) => m.name) as unknown as readonly [
  // ... names inferred
]

type AllGroqChatModels = (typeof ALL_GROQ_CHAT_MODELS)[number]

export type GroqModelInputModalitiesByName = {
  [M in AllGroqChatModels as M['name']]: M['supports']['input']
}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-groq/src/model-meta.ts` around lines 331 - 344, The current GroqModelInputModalitiesByName is a manually maintained map that can drift from GROQ_CHAT_MODELS; create a single source-of-truth tuple (e.g., ALL_GROQ_CHAT_MODELS = [LLAMA_3_1_8B_INSTANT, ..., QWEN3_32B] as const), derive GROQ_CHAT_MODELS from it (map to .name) and then replace the manual GroqModelInputModalitiesByName with a mapped/derived type: compute AllGroqChatModels = (typeof ALL_GROQ_CHAT_MODELS)[number] and use [M in AllGroqChatModels as M['name']]: M['supports']['input'] so new models added to ALL_GROQ_CHAT_MODELS automatically flow into GroqModelInputModalitiesByName (update references to GROQ_CHAT_MODELS, ALL_GROQ_CHAT_MODELS, and GroqModelInputModalitiesByName accordingly).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/typescript/ai-groq/src/model-meta.ts`:
- Around line 285-303: Update the QWEN3_32B model metadata so its
supports.features includes 'json_object'; locate the QWEN3_32B constant in
model-meta.ts and change the features array on that object (currently
['streaming','tools','reasoning']) to
['streaming','tools','reasoning','json_object'] so the adapter will allow JSON
Object Mode for qwen/qwen3-32b.
---
Duplicate comments:
In `@packages/typescript/ai-groq/src/model-meta.ts`:
- Around line 77-95: LLAMA_4_SCOUT_17B_16E_INSTRUCT has input:['text','image']
but its features array omits 'vision', causing capability checks to block image
paths; update the features array on the LLAMA_4_SCOUT_17B_16E_INSTRUCT constant
to include 'vision' (alongside existing 'streaming', 'tools', 'json_object') so
the model's metadata correctly advertises vision support to any code using
ModelMeta/ResolveInputModalities.
- Around line 236-262: Update the GPT_OSS_20B model meta: change
max_completion_tokens from the bare integer 65536 to the underscore-separated
numeric literal 65_536 to match the file style, and add 'tools' to the
supports.features array (alongside 'streaming', 'browser_search', etc.) so
GPT_OSS_20B matches the capabilities declared for GPT_OSS_120B and
GPT_OSS_SAFEGUARD_20B; edit the GPT_OSS_20B constant to make these two small
fixes.
---
Nitpick comments:
In `@packages/typescript/ai-groq/src/model-meta.ts`:
- Around line 331-344: The current GroqModelInputModalitiesByName is a manually
maintained map that can drift from GROQ_CHAT_MODELS; create a single
source-of-truth tuple (e.g., ALL_GROQ_CHAT_MODELS = [LLAMA_3_1_8B_INSTANT, ...,
QWEN3_32B] as const), derive GROQ_CHAT_MODELS from it (map to .name) and then
replace the manual GroqModelInputModalitiesByName with a mapped/derived type:
compute AllGroqChatModels = (typeof ALL_GROQ_CHAT_MODELS)[number] and use [M in
AllGroqChatModels as M['name']]: M['supports']['input'] so new models added to
ALL_GROQ_CHAT_MODELS automatically flow into GroqModelInputModalitiesByName
(update references to GROQ_CHAT_MODELS, ALL_GROQ_CHAT_MODELS, and
GroqModelInputModalitiesByName accordingly).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@packages/typescript/ai-groq/src/model-meta.ts`:
- Around line 298-303: The QWEN3_32B model entry is missing the 'json_object'
capability in its supports.features array; update the model metadata for
QWEN3_32B (the supports object where input/output/endpoints/features are
defined) to include 'json_object' in the features list so JSON Object Mode
requests are allowed by the adapter.
- Around line 89-94: Update the supports.features for the
LLAMA_4_SCOUT_17B_16E_INSTRUCT model to reflect its true capabilities: add
'vision' (to match input: ['text','image']) and add 'json_schema' so structured
outputs are enabled; locate the LLAMA_4_SCOUT_17B_16E_INSTRUCT entry and modify
its supports.features array to include both 'vision' and 'json_schema' alongside
the existing items (e.g., ensure supports.features contains
['streaming','tools','json_object','vision','json_schema'] or equivalent
ordering).
AlemTuzlak
left a comment
Will try to test this out manually to confirm everything works asap but looks good code-wise
🎯 Changes
Adds a new `@tanstack/ai-groq` package, a Groq AI adapter for TanStack AI.

What's included:

- Text adapter (`groqText` / `createGroqText`): streaming chat completions via Groq's OpenAI-compatible Chat Completions API, with the full AG-UI event lifecycle (RUN_STARTED → TEXT_MESSAGE_START → TEXT_MESSAGE_CONTENT → TEXT_MESSAGE_END → RUN_FINISHED)
- Structured output support, with JSON schemas adapted for Groq (`additionalProperties: false`)

The adapter follows the established tree-shakeable adapter pattern used by `@tanstack/ai-grok` and `@tanstack/ai-openai`. Uses the `groq-sdk` package and handles Groq-specific differences like `x_groq.usage` for token usage reporting.

✅ Checklist

- `pnpm run test:pr`.
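The `x_groq.usage` difference mentioned above can be sketched with a small extractor. The chunk shape here is a simplified assumption, not the full `groq-sdk` type, and the field names mirror the usual `prompt_tokens` / `completion_tokens` convention:

```typescript
// Hedged sketch: Groq streams report token usage on the final chunk under
// `x_groq.usage` rather than the top-level `usage` field OpenAI uses.
interface GroqStreamChunk {
  choices: Array<{ delta: { content?: string } }>
  x_groq?: { usage?: { prompt_tokens: number; completion_tokens: number } }
}

// Returns normalized usage when present, undefined for intermediate chunks.
export function extractUsage(chunk: GroqStreamChunk) {
  const usage = chunk.x_groq?.usage
  if (!usage) return undefined
  return {
    inputTokens: usage.prompt_tokens,
    outputTokens: usage.completion_tokens,
    totalTokens: usage.prompt_tokens + usage.completion_tokens,
  }
}
```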
Summary by CodeRabbit

- New Features
- Documentation
- Tests
- Chores