Make your LLM prompts executable and version controlled.
Install the package:

```shell
yarn add spellbook-forge
```

In your Express server:

```javascript
import express from "express";
import { spellbookForge } from "spellbook-forge";

const app = express();

app.use(spellbookForge({
  gitHost: 'https://github.com'
}));

app.listen(3000);
```
and then:

```
http://localhost:3000/your/repository/prompt?execute

<-- HTTP 200
{
  "prompt-content": "Complete this phrase in coders’ language: Hello …",
  "model": "gpt3.5",
  "result": "Hello, World!"
}
```
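A minimal client sketch for the endpoint above (assumes Node 18+, which provides a global `fetch`; the host and repository path are illustrative placeholders):

```javascript
// Build the ?execute URL for a prompt stored at a given repository path.
// `host` and `promptPath` are illustrative placeholders.
function buildExecuteUrl(host, promptPath) {
  // Strip leading/trailing slashes so the pieces join cleanly.
  const trimmed = promptPath.replace(/^\/+|\/+$/g, "");
  return `${host}/${trimmed}?execute`;
}

// Fetch the executed prompt and return the parsed JSON response.
async function runPrompt(host, promptPath) {
  const response = await fetch(buildExecuteUrl(host, promptPath));
  if (!response.ok) throw new Error(`HTTP ${response.status}`);
  return response.json(); // { "prompt-content": …, "model": …, "result": … }
}

console.log(buildExecuteUrl("http://localhost:3000", "your/repository/prompt"));
// → http://localhost:3000/your/repository/prompt?execute
```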
This is an ExpressJS middleware that allows you to create an API interface for your LLM prompts. It will automatically generate a server for your prompts stored in a git repository. Using Spellbook, you can:
- Store & manage LLM prompts in a familiar tool: a git repository
- Execute prompts with chosen model and get results using a simple API
- Perform basic CRUD operations on prompts
Note: It's an early version. Expect bugs, breaking changes and poor performance.
Full documentation coming soon!
Prompts must adhere to a specific format (JSON/YAML). See more info here
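The exact schema isn't documented here, but based on the fields that appear in the example response above (`prompt-content`, `model`), a `prompt.json` might plausibly look like this (an illustrative guess, not the confirmed format):

```json
{
  "prompt-content": "Complete this phrase in coders’ language: Hello …",
  "model": "gpt3.5"
}
```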
```
├── prompt1
│   ├── prompt.json
│   └── readme.md
└── collection
    └── prompt2
        ├── prompt.yaml
        └── readme.md
```
The above file structure will result in the following API endpoints being generated:

```
{host}/prompt1
{host}/collection/prompt2
```
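The layout-to-endpoint mapping above can be sketched as a pure function (an illustration only, not the middleware's actual routing code): each directory containing a `prompt.json` or `prompt.yaml` becomes an endpoint path.

```javascript
// Derive an endpoint path from a prompt file's path inside the repository.
// Illustrative only: the real middleware's routing logic may differ.
function endpointForPromptFile(filePath) {
  const parts = filePath.split("/");
  const fileName = parts.pop();
  // Only prompt definition files map to endpoints.
  if (!/^prompt\.(json|ya?ml)$/.test(fileName)) return null;
  return "/" + parts.join("/");
}

console.log(endpointForPromptFile("prompt1/prompt.json"));            // → /prompt1
console.log(endpointForPromptFile("collection/prompt2/prompt.yaml")); // → /collection/prompt2
console.log(endpointForPromptFile("prompt1/readme.md"));              // → null (not a prompt file)
```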
- `prompt.json` - the main file with the prompt content and configuration.
- `readme.md` - additional information about prompt usage, examples, etc.