
llamacpp-ai-provider

Vercel AI Provider for running Large Language Models locally using LLamaCpp without a server

🚧🚧 WARNING! Under Construction 🚧🚧

This project is under active construction and currently depends on the node-llama-cpp library. This dependency will be replaced by a low-level API that integrates directly with the LLamaC++ library. See the roadmap section for more details.

Features

  • Vercel AI full stack support
  • Run local LLMs directly without server dependency
  • Supports most of the models that the LLamaC++ library supports

Roadmap

  • [ ] Direct integration with LLamaC++
  • [x] Support Completion Language Model

Installation

npm install --save ai llamacpp-ai-provider

The example below expects a Llama 2 model file in the models folder at the project root. You can download a GGUF-compatible file from Hugging Face.
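
One way to fetch such a file is with the huggingface_hub CLI. The repository name below is an assumption inferred from the file name used in the example; any GGUF build of the model works:

pip install -U "huggingface_hub[cli]"
huggingface-cli download TheBloke/Llama-2-7B-Chat-GGUF llama-2-7b-chat.Q4_K_M.gguf --local-dir ./models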

Usage

import { experimental_generateText } from "ai";
import { LLamaCpp } from "llamacpp-ai-provider";
import { fileURLToPath } from "url";
import path from "path";

// Resolve the GGUF model file relative to this module.
const modelPath = path.join(
  path.dirname(fileURLToPath(import.meta.url)),
  "../../models",
  "llama-2-7b-chat.Q4_K_M.gguf"
);

// Create the provider, pointing it at the local model file.
const llamacpp = new LLamaCpp(modelPath);

// The result also includes token usage and the finish reason.
experimental_generateText({
  model: llamacpp.completion(),
  prompt: "Invent a new holiday and describe its traditions.",
}).then(({ text, usage, finishReason }) => {
  console.log(`AI: ${text}`);
});
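
The ai package also ships an experimental_streamText function alongside experimental_generateText. The sketch below is a hedged example rather than documented behavior of this provider: it assumes llamacpp.completion() implements the streaming part of the SDK's language model interface.

import { experimental_streamText } from "ai";
import { LLamaCpp } from "llamacpp-ai-provider";
import { fileURLToPath } from "url";
import path from "path";

const modelPath = path.join(
  path.dirname(fileURLToPath(import.meta.url)),
  "../../models",
  "llama-2-7b-chat.Q4_K_M.gguf"
);

const llamacpp = new LLamaCpp(modelPath);

// Assumption: the completion model supports streaming responses.
const result = await experimental_streamText({
  model: llamacpp.completion(),
  prompt: "Invent a new holiday and describe its traditions.",
});

// Print each text delta as soon as the model emits it.
for await (const delta of result.textStream) {
  process.stdout.write(delta);
}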

For more examples, see the getting started guide.
