Immutable, chainable, and type-safe wrapper around Vercel's AI SDK.
pnpm add @synstack/llm ai zod
yarn add @synstack/llm ai zod
npm install @synstack/llm ai zod
To add models, install the appropriate provider package:
pnpm add @ai-sdk/openai # or @ai-sdk/[provider-name]
yarn add @ai-sdk/openai # or @ai-sdk/[provider-name]
npm install @ai-sdk/openai # or @ai-sdk/[provider-name]
The completion builder provides a type-safe API to configure LLM completions:
import { completion } from "@synstack/llm"; // or @synstack/synscript/llm
import { openai } from "@ai-sdk/openai";
const baseCompletion = completion
.model(openai("gpt-4"))
.maxTokens(20)
.temperature(0.8);
import { systemMsg, userMsg, assistantMsg, filePart } from "@synstack/llm";

const imageToLanguagePrompt = (imagePath: string) => [
systemMsg`
You are a helpful assistant that can identify the language of the text in the image.
`,
userMsg`
Here is the image: ${filePart.fromPath(imagePath)}
`,
assistantMsg`
The language of the text in the image is
`,
];
const imageToLanguageAgent = (imagePath: string) =>
baseCompletion.prompt(imageToLanguagePrompt(imagePath)).generateText();
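Usage is then a single awaited call; the path below is illustrative:
// Sketch: resolves once the completion finishes
const result = await imageToLanguageAgent("./photo.png");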
- model(): Set the language model
- maxTokens(): Set maximum tokens to generate
- temperature(): Set temperature (0-1)
- topP(), topK(): Configure sampling parameters
- frequencyPenalty(), presencePenalty(): Adjust output diversity
- seed(): Set random seed for deterministic results
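Because the builder is immutable and chainable, these options can be layered onto an existing completion; a minimal sketch with illustrative values:
const tunedCompletion = baseCompletion
  .topP(0.9)
  .frequencyPenalty(0.5)
  .seed(42); // fixed seed for reproducible outputs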
- maxSteps(): Maximum number of sequential LLM calls
- maxRetries(): Number of retry attempts
- stopSequences(): Define sequences that stop generation
- abortSignal(): Cancel ongoing completions
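A sketch combining these controls; the stop sequence and retry count are illustrative:
const controller = new AbortController();
const controlledCompletion = baseCompletion
  .maxRetries(3)
  .stopSequences(["\n\n"])
  .abortSignal(controller.signal);
// Later, cancel the in-flight completion if needed:
// controller.abort();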
- generateText(): Generate a text completion
- streamText(): Stream a text completion
- generateObject(): Generate a structured object
- streamObject(): Stream a structured object
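A minimal streaming sketch, assuming streamText() mirrors the AI SDK's streaming result with its textStream async iterable:
const stream = await baseCompletion
  .prompt(imageToLanguagePrompt("./image.png"))
  .streamText();
// Consume chunks as they arrive
for await (const chunk of stream.textStream) {
  process.stdout.write(chunk);
}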
Messages can be built using template strings with several features:
- Interpolate promises or arrays of promises in your template string as if they were synchronous (see the sketch after this list)
- Format your prompt for readability with automatic trimming and padding removal
- Type-safe template values that prevent invalid prompt content
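For example, an asynchronous value can be interpolated directly; the fetchDocument helper below is hypothetical, used only for illustration:
// Hypothetical async helper
const fetchDocument = async (): Promise<string> =>
  "Contents fetched from a database or API";

// The promise is interpolated as if already resolved;
// the library awaits it when the message is built
const summaryMsg = userMsg`
  Summarize the following document: ${fetchDocument()}
`;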
Template-based message builders for different roles:
// System messages
systemMsg`
You are a helpful assistant.
`;
// User messages with support for text, images and files
userMsg`
Here is the image: ${filePart.fromPath("./image.png")}
`;
// Assistant messages with support for text and tool calls
assistantMsg`
The language of the text in the image is
`;
The package provides customization options for messages with provider-specific settings:
// User message with cache control
const cachedUserMsg = userMsg.cached`
Here is the image: ${filePart.fromPath("./image.png")}
`;
// Custom provider options for user messages
const customUserMsg = userMsgWithOptions({
providerOptions: { anthropic: { cacheControl: { type: "ephemeral" } } },
})`Hello World`;
// Custom provider options for assistant messages
const customAssistantMsg = assistantMsgWithOptions({
providerOptions: { openai: { cacheControl: { type: "ephemeral" } } },
})`Hello World`;
// Custom provider options for system messages
const customSystemMsg = systemMsgWithOptions({
providerOptions: { anthropic: { system_prompt_behavior: "default" } },
})`Hello World`;
The filePart utility provides methods to handle files and images, and supports automatic mime-type detection:
// Load from file system path
filePart.fromPath(path, mimeType?)
// Load from base64 string
filePart.fromBase64(base64, mimeType?)
// Load from URL
filePart.fromUrl(url, mimeType?)
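These parts plug directly into message templates; the paths and URL below are illustrative:
const comparisonMsg = userMsg`
  Compare this local file: ${filePart.fromPath("./a.pdf", "application/pdf")}
  with this remote one: ${filePart.fromUrl("https://example.com/b.pdf")}
`;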
Tools can be configured in completions for function calling with type safety:
import { z } from "zod";

const completionWithTools = baseCompletion
  .tools({
    search: {
      description: "Search for information",
      parameters: z.object({
        query: z.string(),
      }),
    },
  })
  .activeTools(["search"])
  .toolChoice("auto"); // or "none", "required", or { type: "tool", toolName: "search" }
The library provides middleware utilities to enhance model behavior:
import { includeAssistantMessage, cacheCalls } from "@synstack/llm/middleware";
import { fsCache } from "@synstack/fs-cache";
// Create a cache instance (assumed usage; see the @synstack/fs-cache docs)
const cache = fsCache(".cache");

// Apply middlewares to the completion
const completionWithMiddlewares = baseCompletion
  .middlewares([includeAssistantMessage]) // Include last assistant message in output
  .prependMiddlewares([cacheCalls(cache)]); // Cache model responses
// Apply middlewares directly to the model
const modelWithAssistant = includeAssistantMessage(baseModel);
const modelWithCache = cacheCalls(cache)(baseModel);
- middlewares(): Replace the middlewares
- prependMiddlewares(): Add middlewares to the beginning of the chain, to be executed first
- appendMiddlewares(): Add middlewares to the end of the chain, to be executed last
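Combining both directions, a sketch of a chain where caching runs first and the assistant message is included last (reusing the cache and middlewares from the example above):
const fullCompletion = baseCompletion
  .prependMiddlewares([cacheCalls(cache)]) // executed first
  .appendMiddlewares([includeAssistantMessage]); // executed last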
For more details on available options, refer to Vercel's AI SDK documentation: https://sdk.vercel.ai/docs