Langchain-LLAMA

⚠️ WIP ⚠️

Run LLaMA LLMs locally in LangChain. Ported from linonetwo/langchain-alpaca.

npm i langchain-llama

Usage

This example uses the ggml-vicuna-7b-4bit-rev1 model:

import { LLAMACPP } from 'langchain-llama'

const main = async () => {
    const vicuna = new LLAMACPP({
        model: './vicuna/ggml-vicuna-7b-4bit-rev1.bin', // Path to the model weights
        executablePath: './vicuna/main.exe', // Path to the compiled binary
        params: [ // Command-line flags passed to the binary
            '-i',                  // interactive mode
            '--interactive-first', // wait for input before generating
            '-t', '8',             // number of threads
            '--temp', '0',         // sampling temperature
            '-c', '2048',          // context size in tokens
            '-n', '-1',            // tokens to predict (-1 = unlimited)
            '--ignore-eos',        // keep generating past end-of-sequence tokens
            '--repeat_penalty', '1.2' // penalty for repeated tokens
        ]
    })
    await vicuna.init()
    const response = await vicuna.generate(['Say "Hello World"'])
    console.log(response.generations)
}

main()
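
generate takes an array of prompts, and the result's generations field holds the completions. Below is a minimal sketch of reading the generated text back out; it assumes the result follows LangChain's standard LLMResult shape (one list of completions per prompt, each with a text field), which comes from LangChain's interface rather than this package's docs.

import { LLAMACPP } from 'langchain-llama'

const run = async () => {
    const vicuna = new LLAMACPP({
        model: './vicuna/ggml-vicuna-7b-4bit-rev1.bin',
        executablePath: './vicuna/main.exe',
        params: [ // same flags as the example above
            '-i', '--interactive-first', '-t', '8', '--temp', '0',
            '-c', '2048', '-n', '-1', '--ignore-eos', '--repeat_penalty', '1.2'
        ]
    })
    await vicuna.init()

    // Assumed: generations[i] holds the completions for the i-th prompt
    const response = await vicuna.generate(['Say "Hello World"', 'Count to three'])
    response.generations.forEach((gens, i) => {
        console.log(`Prompt ${i}:`, gens.map((g) => g.text).join('\n'))
    })
}

run()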

This project is still a work in progress; better documentation will be added soon.
