@synstack/llm

2.3.4 • Public • Published

Immutable, chainable, and type-safe wrapper of Vercel's AI SDK.

Installation

pnpm add @synstack/llm ai zod
yarn add @synstack/llm ai zod
npm install @synstack/llm ai zod

To use a specific model, install the appropriate provider package:

pnpm add @ai-sdk/openai # or @ai-sdk/[provider-name]
yarn add @ai-sdk/openai # or @ai-sdk/[provider-name]
npm install @ai-sdk/openai # or @ai-sdk/[provider-name]

Features

Completion Building

The completion builder provides a type-safe API to configure LLM completions:

import {
  completion,
  systemMsg,
  userMsg,
  assistantMsg,
  filePart,
} from "@synstack/llm"; // or @synstack/synscript/llm
import { openai } from "@ai-sdk/openai";

const baseCompletion = completion
  .model(openai("gpt-4"))
  .maxTokens(20)
  .temperature(0.8);

const imageToLanguagePrompt = (imagePath: string) => [
  systemMsg`
    You are a helpful assistant that can identify the language of the text in the image.
  `,
  userMsg`
    Here is the image: ${filePart.fromPath(imagePath)}
  `,
  assistantMsg`
    The language of the text in the image is
  `,
];

const imageToLanguageAgent = (imagePath: string) =>
  baseCompletion.prompt(imageToLanguagePrompt(imagePath)).generateText();

Model Configuration

  • model(): Set the language model
  • maxTokens(): Set maximum tokens to generate
  • temperature(): Set temperature (0-1)
  • topP(), topK(): Configure sampling parameters
  • frequencyPenalty(), presencePenalty(): Adjust output diversity
  • seed(): Set random seed for deterministic results
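
All of these setters follow the immutable builder pattern promised in the package tagline: each call returns a new completion instead of mutating the existing one, so a base configuration can be safely shared and specialized. A minimal standalone sketch of that pattern (illustrative only, not the library's implementation):

```typescript
// Each setter copies the config into a NEW builder, leaving the
// original untouched, so partially-configured builders can be reused.
type Config = { maxTokens?: number; temperature?: number };

class CompletionBuilder {
  constructor(private readonly config: Config = {}) {}

  maxTokens(n: number): CompletionBuilder {
    return new CompletionBuilder({ ...this.config, maxTokens: n });
  }

  temperature(t: number): CompletionBuilder {
    return new CompletionBuilder({ ...this.config, temperature: t });
  }

  get settings(): Config {
    return { ...this.config };
  }
}

const base = new CompletionBuilder().maxTokens(20);
const warm = base.temperature(0.8);

console.log(base.settings); // { maxTokens: 20 } (unchanged by the second call)
console.log(warm.settings); // { maxTokens: 20, temperature: 0.8 }
```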

Flow Control

  • maxSteps(): Maximum number of sequential LLM calls
  • maxRetries(): Number of retry attempts
  • stopSequences(): Define sequences that stop generation
  • abortSignal(): Cancel ongoing completions
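
As a rough sketch of the retry semantics these options describe (repeat a failing call up to a retry limit, and stop early when an abort signal fires), here is a hypothetical standalone helper; it is not the library's internals, which delegate to the AI SDK:

```typescript
// Retry a failing async call up to `maxRetries` extra attempts,
// bailing out immediately if the signal has been aborted.
async function withRetries<T>(
  fn: () => Promise<T>,
  maxRetries: number,
  signal?: AbortSignal,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    if (signal?.aborted) throw new Error("aborted");
    try {
      return await fn();
    } catch (err) {
      lastError = err;
    }
  }
  throw lastError;
}

// Usage: a call that fails twice succeeds on the third attempt.
let calls = 0;
const flaky = async () => {
  calls++;
  if (calls < 3) throw new Error("transient");
  return "ok";
};
withRetries(flaky, 2).then((result) => console.log(result)); // "ok"
```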

Generation Methods

  • generateText(): Generate text completion
  • streamText(): Stream text completion
  • generateObject(): Generate structured object
  • streamObject(): Stream structured object

Message Building

Messages can be built using template strings with various features:

  • Embed promises, or arrays of promises, in your template string as if they were synchronous values
  • Format your prompt for readability with automatic trimming and padding removal
  • Type-safe template values that prevent invalid prompt content
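
The "automatic trimming and padding removal" behavior can be pictured with a standalone tagged-template sketch (illustrative only, not the library's implementation): the common leading indentation and surrounding blank lines are stripped, so prompts can be indented naturally in source code.

```typescript
// A tagged template that removes outer blank lines and the smallest
// shared indentation, mimicking the prompt-formatting described above.
function dedent(strings: TemplateStringsArray, ...values: unknown[]): string {
  const raw = strings.reduce(
    (acc, s, i) => acc + s + (i < values.length ? String(values[i]) : ""),
    "",
  );
  const lines = raw.split("\n");
  // Drop leading and trailing blank lines.
  while (lines.length && lines[0].trim() === "") lines.shift();
  while (lines.length && lines[lines.length - 1].trim() === "") lines.pop();
  // Compute the smallest indentation across non-blank lines.
  const indents = lines
    .filter((l) => l.trim() !== "")
    .map((l) => l.match(/^ */)![0].length);
  const pad = indents.length ? Math.min(...indents) : 0;
  return lines.map((l) => l.slice(pad)).join("\n");
}

const msg = dedent`
    You are a helpful assistant.
    Answer concisely.
`;
console.log(msg); // "You are a helpful assistant.\nAnswer concisely."
```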

Template-based message builders for different roles:

// System messages
systemMsg`
  You are a helpful assistant.
`;

// User messages with support for text, images and files
userMsg`
  Here is the image: ${filePart.fromPath("./image.png")}
`;

// Assistant messages with support for text and tool calls
assistantMsg`
  The language of the text in the image is
`;

Advanced Message Configuration

The package provides customization options for messages with provider-specific settings:

// User message with cache control
const cachedUserMsg = userMsg.cached`
  Here is the image: ${filePart.fromPath("./image.png")}
`;

// Custom provider options for user messages
const customUserMsg = userMsgWithOptions({
  providerOptions: { anthropic: { cacheControl: { type: "ephemeral" } } },
})`Hello World`;

// Custom provider options for assistant messages
const customAssistantMsg = assistantMsgWithOptions({
  providerOptions: { openai: { cacheControl: { type: "ephemeral" } } },
})`Hello World`;

// Custom provider options for system messages
const customSystemMsg = systemMsgWithOptions({
  providerOptions: { anthropic: { system_prompt_behavior: "default" } },
})`Hello World`;

File Handling

The filePart utility provides methods to handle files and images, and supports automatic mime-type detection:

// Load from file system path
filePart.fromPath(path, mimeType?)

// Load from base64 string
filePart.fromBase64(base64, mimeType?)

// Load from URL
filePart.fromUrl(url, mimeType?)
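
The optional mimeType parameter suggests detection falls back to the file itself when omitted. A hypothetical sketch of extension-based mime-type detection (the real library may resolve types differently):

```typescript
// Map common file extensions to mime types; unknown extensions
// return undefined so a caller can pass an explicit type instead.
const MIME_BY_EXT: Record<string, string> = {
  ".png": "image/png",
  ".jpg": "image/jpeg",
  ".jpeg": "image/jpeg",
  ".gif": "image/gif",
  ".webp": "image/webp",
  ".pdf": "application/pdf",
  ".txt": "text/plain",
};

function detectMimeType(path: string): string | undefined {
  const match = path.toLowerCase().match(/\.[^./\\]+$/);
  return match ? MIME_BY_EXT[match[0]] : undefined;
}

console.log(detectMimeType("./image.png")); // "image/png"
console.log(detectMimeType("docs/report.PDF")); // "application/pdf"
console.log(detectMimeType("README")); // undefined
```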

Tool Usage

Tools can be configured in completions for function calling with type safety:

import { z } from "zod";

const completionWithTools = baseCompletion
  .tools({
    search: {
      description: "Search for information",
      parameters: z.object({
        query: z.string(),
      }),
    },
  })
  .activeTools(["search"])
  .toolChoice("auto"); // or 'none', 'required', or { type: 'tool', toolName: 'search' }
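
To make the toolChoice options concrete, here is a standalone sketch of how each setting constrains which tools may run (hypothetical logic; see Vercel's AI SDK documentation for the authoritative semantics):

```typescript
// The four toolChoice shapes and how each narrows the available tools.
type ToolChoice =
  | "auto" // the model decides whether to call a tool
  | "none" // tools are never called
  | "required" // the model must call some tool
  | { type: "tool"; toolName: string }; // force one specific tool

function allowedTools(choice: ToolChoice, toolNames: string[]): string[] {
  if (choice === "none") return [];
  if (typeof choice === "object") {
    return toolNames.filter((n) => n === choice.toolName);
  }
  return toolNames; // "auto" and "required" leave every tool available
}

console.log(allowedTools("auto", ["search", "calc"])); // ["search", "calc"]
console.log(allowedTools("none", ["search", "calc"])); // []
console.log(allowedTools({ type: "tool", toolName: "search" }, ["search", "calc"])); // ["search"]
```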

Model Middlewares

The library provides middleware utilities to enhance model behavior:

import { includeAssistantMessage, cacheCalls } from "@synstack/llm/middleware";
import { fsCache } from "@synstack/fs-cache";

// Apply middlewares to completion
const completionWithMiddleware = baseCompletion
  .middlewares([includeAssistantMessage]) // Include last assistant message in output
  .prependMiddlewares([cacheCalls(cache)]); // Cache model responses

// Apply middlewares directly to the model
const modelWithAssistant = includeAssistantMessage(baseModel);
const modelWithCache = cacheCalls(cache)(baseModel);

  • middlewares(): Replace the middleware chain
  • prependMiddlewares(): Add middlewares to the beginning of the chain, to be executed first
  • appendMiddlewares(): Add middlewares to the end of the chain, to be executed last
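
The prepend/append ordering can be pictured with a minimal standalone middleware chain (illustrative only, not the library's implementation): each middleware wraps the next handler, and earlier entries in the chain run first.

```typescript
// Middlewares wrap the base handler; chain[0] becomes the outermost
// (first-executed) layer, so prepending runs before everything else.
type Handler = (input: string) => string;
type Middleware = (next: Handler) => Handler;

function applyChain(chain: Middleware[], base: Handler): Handler {
  // Wrap from the end so the first chain entry ends up outermost.
  return chain.reduceRight((next, mw) => mw(next), base);
}

const tag =
  (label: string): Middleware =>
  (next) =>
  (input) =>
    `${label}(${next(input)})`;

const baseHandler: Handler = (input) => input.toUpperCase();

const chain = [tag("first"), tag("second")];
console.log(applyChain(chain, baseHandler)("hi")); // "first(second(HI))"
```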

For more details on available options, please refer to Vercel's AI SDK documentation.

License

Apache-2.0