Search results
10 packages found
Run AI models locally on your machine with Node.js bindings for llama.cpp. Enforce a JSON schema on the model output at the generation level
Keywords: llama, llama-cpp, llama.cpp, bindings, ai, cmake, cmake-js, prebuilt-binaries, llm, gguf, metal, cuda, vulkan, grammar, …
A GGUF parser that works on remotely hosted files
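None of the listed packages publishes this exact code; the following is a minimal sketch of what a GGUF parser does, based on the public GGUF specification (magic bytes "GGUF", then a little-endian version, tensor count, and metadata key-value count). A remote-capable parser fetches only these first 24 bytes with an HTTP Range request instead of downloading the whole model file.

```javascript
// Parse the fixed-size GGUF header from a byte buffer.
// Layout (all little-endian, per the GGUF spec):
//   bytes 0-3   magic "GGUF"
//   bytes 4-7   uint32 version (currently 3)
//   bytes 8-15  uint64 tensor count
//   bytes 16-23 uint64 metadata key-value count
function parseGgufHeader(bytes) {
  const view = new DataView(bytes.buffer, bytes.byteOffset, bytes.byteLength);
  const magic = String.fromCharCode(bytes[0], bytes[1], bytes[2], bytes[3]);
  if (magic !== "GGUF") throw new Error("not a GGUF file");
  return {
    version: view.getUint32(4, true),
    tensorCount: view.getBigUint64(8, true),
    metadataKvCount: view.getBigUint64(16, true),
  };
}
```

A remote parser would obtain `bytes` with something like `fetch(url, {headers: {Range: "bytes=0-23"}})` and keep issuing small ranged reads for the metadata section that follows the header.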
Various utilities for maintaining Ollama compatibility with models on the Hugging Face Hub
llama.cpp GGUF file parser for JavaScript
Chat UI and Local API for the Llama models
A browser-friendly library for running LLM inference using Wllama with preset and dynamic model loading, caching, and download capabilities.
Native Node.js plugin to run LLaMA inference directly on your machine with no other dependencies.
Run AI models locally on your machine with Node.js bindings for llama.cpp. Force a JSON schema on the model output at the generation level
Keywords: llama, llama-cpp, llama.cpp, bindings, ai, cmake, cmake-js, prebuilt-binaries, llm, gguf, metal, cuda, grammar, json-grammar, …
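Two of the listings above promise to enforce a JSON schema "at the generation level" rather than validating output after the fact. This is not any package's actual API, but a toy sketch of the underlying technique: at each sampling step, candidate tokens that could not extend a structurally valid JSON document are masked out, so the model can only ever emit conforming text.

```javascript
// Return true if `prefix` could still be extended into valid JSON,
// i.e. no bracket has been closed with the wrong (or no) opener.
// String contents and escape sequences are skipped over.
function couldExtendJson(prefix) {
  const stack = [];
  let inString = false, escaped = false;
  for (const ch of prefix) {
    if (escaped) { escaped = false; continue; }
    if (inString) {
      if (ch === "\\") escaped = true;
      else if (ch === '"') inString = false;
      continue;
    }
    if (ch === '"') { inString = true; continue; }
    if (ch === "{" || ch === "[") stack.push(ch);
    else if (ch === "}") { if (stack.pop() !== "{") return false; }
    else if (ch === "]") { if (stack.pop() !== "[") return false; }
  }
  return true;
}

// Constrained-sampling step: keep only candidate tokens that leave
// the partial output structurally valid.
function allowedTokens(prefix, candidates) {
  return candidates.filter((tok) => couldExtendJson(prefix + tok));
}
```

A real implementation (e.g. llama.cpp's grammar support, which these bindings expose) compiles the full JSON schema into a grammar and applies this mask inside the sampler, which is far stricter than the bracket check sketched here.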
Lightweight JavaScript package for running GGUF language models
A GGUF parser that works on remotely hosted files