Simple AI API package for Node.js.
## Installation

```shell
npm i @jnode/ai
```
## Usage

```js
const { AIService, AIModel, AIConversation, AIAgent, AIFunction } = require('@jnode/ai');
const { OAIChatService } = require('@jnode/ai/openai-chat');
const { GeminiService } = require('@jnode/ai/gemini');
const { ClaudeService } = require('@jnode/ai/claude');
```

A minimal chat example with OpenAI:

```js
const { AIAgent } = require('@jnode/ai');
const { OAIChatService } = require('@jnode/ai/openai-chat');

const service = new OAIChatService({ auth: 'sk-your-openai-api-key' });
const model = service.model('gpt-4o');
const agent = new AIAgent(model, {
  instructions: 'You are a helpful assistant.'
});

(async () => {
  const conversation = await agent.interact('Hello, how are you?');
  console.log(conversation.last.components[0].content);
})();
```

Streaming with automatic function calling, using Gemini:

```js
const { AIAgent, AIFunction } = require('@jnode/ai');
const { GeminiService } = require('@jnode/ai/gemini');

const service = new GeminiService({ auth: 'your-gemini-api-key' });
const model = service.model('gemini-2.5-flash');
const agent = new AIAgent(model, {
  functions: [
    new AIFunction(
      'get_weather',
      'Get current weather for a location.',
      {
        type: 'object',
        properties: { location: { type: 'string' } },
        required: ['location']
      },
      async (args) => {
        return { weather: 'Sunny', temp: 25, location: args.location };
      }
    )
  ]
});

(async () => {
  const stream = agent.streamInteract('What is the weather in Taipei?');
  for await (const event of stream) {
    if (event.type === 'continue' && event.content) {
      process.stdout.write(event.content);
    }
  }
})();
```

This AI agent framework brings you a simple, fast, and extensible development experience across different AI providers.
Here's what `@jnode/ai` does:
- Define an agent with a model and specific configurations.
- Build a conversation with the agent.
- Use the agent or conversation to interact with the underlying model, executing built-in or custom functions automatically.
Pretty simple, isn't it?
Further, an agent holds configuration (such as system instructions and generation options), and interactions return or stream updated conversations built from standard components (text, files, tool calls, and thoughts). Because these components are provider-neutral, you can freely switch between models such as Claude, Gemini, and OpenAI.
## API

### AIService

A base class representing an AI service provider. Providers like `OAIChatService`, `GeminiService`, and `ClaudeService` extend or implement this interface.

#### new AIService(options)

- `options` `<Object>`

#### service.model(name[, options])

- `name` `<string>` The unique model name (e.g., `'gpt-4o'`).
- `options` `<Object>` Options to override service-level options.
- Returns: `<AIModel>`

The service also provides a method for listing available models:

- `options` `<Object>` Request options overriding the service auth.
- Returns: `<Promise>` Resolves to a list of models.
### AIModel

Represents an interactive AI model. Extended by provider-specific models like `OAIChatModel`, `GeminiModel`, and `ClaudeModel`.

#### new AIModel(service, name[, options])

- `service` `<AIService>` The parent service instance.
- `name` `<string>` Model name.
- `options` `<Object>` Model-specific options.

The model also provides a method for querying its metadata:

- `options` `<Object>`
- Returns: `<Promise>` Resolves to a structured object describing the model's metadata and capabilities (e.g., `features.reasoning`, `features.multimodalCapabilities`, `features.actions`).
#### model.interact(agent, conversation[, context[, options]])

- `agent` `<AIAgent>` | `<Object>` The agent to base generation configs on.
- `conversation` `<AIConversation>` | `<Array>` | `<string>` The current message context.
- `context` `[<any>]` Context passed to underlying tool functions and actions.
- `options` `<Object>`
- Returns: `<Promise>` Resolves to an `<AIConversation>` containing the new messages and updated meta.

#### model.streamInteract(agent, conversation[, context[, options]])

- Same parameters as `model.interact()`.
- Returns: `<AsyncGenerator>` Yields stream events like `{ type: 'component', component }`, `{ type: 'continue', content }`, and `{ type: 'end', conversation }`.
### AIAgent

Holds the unified generation settings and tools.

#### new AIAgent(model, agent)

- `model` `<AIModel>` The default model interface.
- `agent` `<Object>`
  - `temperature` `<number>` Generation temperature `0.0`~`2.0`.
  - `topP` | `top_p` `<number>` Top P `0.0`~`1.0`.
  - `topK` | `top_k` `<number>` Top K `>= 1`.
  - `seed` `<number>` Random seed.
  - `outputLimit` | `output_limit` `<number>` Max output token limit.
  - `stopStrings` | `stop_strings` `<string[]>` Array of stop sequences.
  - `logprobs` `<boolean>` Enable log probabilities.
  - `frequencyPenalty` | `frequency_penalty` `<number>` Frequency penalty `-2.0`~`2.0`.
  - `presencePenalty` | `presence_penalty` `<number>` Presence penalty `-2.0`~`2.0`.
  - `thinkingLevel` | `thinking_level` `<string>` Extended reasoning level: `'none'`, `'low'`, `'medium'`, `'high'`.
  - `responseSchema` | `response_schema` `<Object>` JSON schema for structured JSON output.
  - `instructions` `<string>` System prompt / core instructions.
  - `actions` `<Array>` Array of inline functions or native actions.
  - `functions` `<Array>` Array of tool functions.
  - `x` `<Object>` Platform/model-specific data escapes.
#### agent.interact(conversation[, context[, options]])

Shorthand for calling `interact()` on the agent's attached model.

#### agent.streamInteract(conversation[, context[, options]])

Shorthand for calling `streamInteract()` on the agent's attached model.
### AIConversation

Represents a parsed conversation history with unified components. Internally, each message turn uses a `role` and an array of `components` containing types like `text`, `thought`, `file`, `function_call`, `function_response`, and `action`.

#### new AIConversation(agent, conversation)

- `agent` `<AIAgent>` The agent handling the conversation context.
- `conversation` `<Array>` | `<string>` Can be a single prompt string, a single component, an array of components, or an array of full message turns.

#### conversation.last

- Type: `<Object>` | `null`

Gets the last message turn in the conversation.

A conversation also provides methods that:

- Push new conversation turns and interact using the associated agent.
- Push new conversation turns and stream-interact using the associated agent.
- Append newly parsed message turns to the history.
- Return a new `AIConversation` instance cloning the same history and attached agent.
### AIFunction

#### new AIFunction(name, description, parameters, fn[, options])

- `name` `<string>` Function name.
- `description` `<string>` Function description.
- `parameters` `<Object>` JSON schema defining the parameters.
- `fn` `<Function>` The execution handler: `async (args, ctx) => any`. Can return raw data or an `AIFunctionResponse`.
- `options` `<Object>`

An `AIFunction` also provides methods that:

- Return function descriptors formatted for requests.
- Execute the function safely, automatically wrapping raw results or errors in an `AIFunctionResponse`.
A function kit bundles multiple tools:

- `functions` `<Array>` Array of `AIFunction` instances. When a kit is passed into an agent config, all tools are automatically unpacked from its `.kit`.
A remote network-boundary API is also available for evaluating standard `AIFunction` objects on remote servers:

- `url` `<string>` Remote API server URL.
- `config` `[<any>]` Passed configurations.
- `options` `<Object>` Local cached descriptors and authorization headers.
A wrapper class passes through capabilities integrated on the LLM provider side (such as built-in search or code execution):

- `name` `<string>` E.g., `'@google_search'` or `'@code_execution'`. Must start with `@`.
- `config` `[<any>]` Tool-specific configurations matching the native provider settings.
### AIFunctionResponse

Represents normalized tool/function execution results.

- `status` `<string>` Response status (e.g., `'success'`, `'error'`, or `'blocked'`).
- `name` `<string>` Function execution name.
- `result` `[<any>]` Data payload, serialized directly.
- `attachments` `<Array>`
- `meta` `<Object>`
### Provider sub-modules

`@jnode/ai` currently ships standard sub-modules connecting to official endpoints:

- OpenAI Chat (`@jnode/ai/openai-chat`) exports `OAIChatService` and `OAIChatModel`.
- Gemini (`@jnode/ai/gemini`) exports `GeminiService` and `GeminiModel`.
- Claude (`@jnode/ai/claude`) exports `ClaudeService` and `ClaudeModel`.