Search results

70 packages found

Use `npm i --save llama.native.js` to run llama.cpp models on your local machine. Features a socket.io server and client that can run inference against the host serving the model.

published 1.1.0 a year ago
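
A minimal client sketch for the socket.io flow described above, assuming a local server and hypothetical event names (`prompt`, `token`, `done`), since the package's actual protocol isn't shown in the listing:

```typescript
// Hypothetical sketch of a client for a llama.native.js-style
// socket.io inference server. Event names and payload shapes are
// assumptions, not the package's documented protocol.
import { io } from "socket.io-client";

const socket = io("http://localhost:3000"); // assumed host address

socket.on("connect", () => {
  // Ask the model host to run inference on a prompt.
  socket.emit("prompt", { text: "Summarize llama.cpp in one sentence." });
});

// Print tokens as the host streams them back.
socket.on("token", (t: string) => process.stdout.write(t));
socket.on("done", () => socket.disconnect());
```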

Testing @xenova's v3 branch

published 3.0.0-alpha.0 2 days ago

State-of-the-art Machine Learning for the web. Run 🤗 Transformers directly in your browser, with no need for a server!

published 3.0.0 12 days ago
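
Assuming this entry is Transformers.js (the description matches), a minimal sketch using its `pipeline()` factory:

```typescript
// Minimal Transformers.js sketch: the model is downloaded once,
// cached, and then runs locally (browser or Node) with no server.
import { pipeline } from "@huggingface/transformers";

const classify = await pipeline(
  "sentiment-analysis",
  "Xenova/distilbert-base-uncased-finetuned-sst-2-english",
);

const result = await classify("Running models in the browser is great!");
console.log(result); // e.g. [{ label: "POSITIVE", score: 0.99 }]
```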

Use the Space mini_header outside Hugging Face

published 1.0.1 3 days ago

Transformer neural networks in the browser

published 0.1.0 2 years ago

An in-memory semantic search database using AI

published 1.3.0 a year ago
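
The package's own API isn't shown in the listing, so here is a sketch of the core technique an in-memory semantic search database builds on: store embedding vectors alongside documents and rank them by cosine similarity.

```typescript
// Sketch of in-memory semantic search: rank stored documents by
// cosine similarity between their embedding and a query embedding.
// How the embeddings are produced (e.g. a sentence-transformer
// model) is up to the caller.
type Doc = { text: string; vec: number[] };

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function search(queryVec: number[], docs: Doc[], k = 3): Doc[] {
  return [...docs]
    .sort((x, y) => cosine(queryVec, y.vec) - cosine(queryVec, x.vec))
    .slice(0, k);
}
```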

Simple Hugging Face inference module

published 0.0.3 2 years ago
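
The listed module's own API isn't shown; as a stand-in, this sketch calls the hosted Inference API through the official @huggingface/inference client, which wrappers like this typically build on:

```typescript
// Sketch using the official @huggingface/inference client; the
// listed package's own interface may differ.
import { HfInference } from "@huggingface/inference";

const hf = new HfInference(process.env.HF_TOKEN); // your access token

const out = await hf.textGeneration({
  model: "gpt2",
  inputs: "Once upon a time",
});
console.log(out.generated_text);
```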

An API to simplify Weaviate vector db queries

published 1.1.1 a year ago
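
The wrapper's own interface isn't shown; for context, this is the kind of weaviate-ts-client GraphQL query it presumably simplifies (the class and field names here are invented placeholders):

```typescript
// Sketch of a raw weaviate-ts-client query of the kind a wrapper
// like this would simplify. "Article" and its fields stand in for
// whatever schema you actually have.
import weaviate from "weaviate-ts-client";

const client = weaviate.client({ scheme: "http", host: "localhost:8080" });

const res = await client.graphql
  .get()
  .withClassName("Article")
  .withFields("title _additional { distance }")
  .withNearText({ concepts: ["machine learning"] })
  .withLimit(3)
  .do();

console.log(JSON.stringify(res, null, 2));
```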

Serve websocket GGML 4/5-bit quantized LLMs based on Meta's LLaMA model with llama.cpp

published 0.1.0 a year ago
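
A hypothetical client for a websocket LLM server like the one above; the message shape is an assumption, so check the package's actual protocol:

```typescript
// Hypothetical websocket client for a llama.cpp-backed server.
// The JSON message shape ({ prompt }) is an assumption.
import WebSocket from "ws";

const ws = new WebSocket("ws://localhost:8080"); // assumed address

ws.on("open", () => {
  ws.send(JSON.stringify({ prompt: "Hello from the client!" }));
});

// Servers like this usually stream generated text back in chunks.
ws.on("message", (data) => process.stdout.write(data.toString()));
```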

State-of-the-art Machine Learning for the web. Run 🤗 Transformers directly in your browser, with no need for a server!

published 2.17.2-wasm-fix 14 days ago