Search results

10 packages found

Run AI models locally on your machine with Node.js bindings for llama.cpp. Enforce a JSON schema on the model output at the generation level.

published version 3.7.0, a month ago, 32 dependents, licensed under MIT
37,190 weekly downloads

A GGUF parser that works on remotely hosted files.

published version 0.1.14, 2 months ago, 7 dependents, licensed under MIT
15,772 weekly downloads
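A parser like the one above can work on remote files because the GGUF header is a small fixed-layout prefix: a 4-byte magic ("GGUF"), a uint32 version, a uint64 tensor count, and a uint64 metadata key/value count, all little-endian per the public GGUF specification. A minimal sketch of that header decode (the function name parseGgufHeader is my own, not any listed package's API):

```javascript
// Decode the fixed GGUF header fields from a byte buffer.
// Layout (GGUF spec): magic "GGUF" | uint32 version |
// uint64 tensor count | uint64 metadata KV count, little-endian.
function parseGgufHeader(buf) {
  const view = new DataView(buf.buffer, buf.byteOffset, buf.byteLength);
  const magic = String.fromCharCode(buf[0], buf[1], buf[2], buf[3]);
  if (magic !== "GGUF") throw new Error("not a GGUF file");
  return {
    version: view.getUint32(4, true),
    tensorCount: view.getBigUint64(8, true),
    kvCount: view.getBigUint64(16, true),
  };
}
```

For a remotely hosted model, only these 24 bytes need to be downloaded first, e.g. with `fetch(url, { headers: { Range: "bytes=0-23" } })` in a browser or Node 18+, before deciding whether to read further metadata.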

Various utilities for maintaining Ollama compatibility with models on the Hugging Face Hub.

published version 0.0.8, 15 days ago, 0 dependents, licensed under MIT
3,256 weekly downloads

llama.cpp GGUF file parser for JavaScript.

published version 0.2.2, a year ago, 1 dependent, licensed under MIT
1,030 weekly downloads

Chat UI and local API for the Llama models.

published version 3.2.2, 10 months ago, 0 dependents, licensed under MIT
304 weekly downloads

A browser-friendly library for running LLM inference using Wllama, with preset and dynamic model loading, caching, and download capabilities.

published version 0.1.3, 10 days ago, 0 dependents, licensed under MIT
107 weekly downloads

Native Node.js plugin to run LLaMA inference directly on your machine with no other dependencies.

published version 0.3.0, 3 months ago, 0 dependents, licensed under MIT
38 weekly downloads

Run AI models locally on your machine with Node.js bindings for llama.cpp. Force a JSON schema on the model output at the generation level.

published version 0.1.0, 9 months ago, 0 dependents, licensed under MIT
36 weekly downloads

Lightweight JavaScript package for running GGUF language models.

published version 0.1.0, a month ago, 0 dependents, licensed under MIT
12 weekly downloads

A GGUF parser that works on remotely hosted files.

published version 0.1.12-dev, 2 months ago, 0 dependents, licensed under MIT
8 weekly downloads