The simplest, most advanced LLM prompting library.
const stream = ai.chat('How to make bread?')
let recipe = ''
for await (const chunk of stream) {
  recipe += chunk
}
const company = await ai.chat('What company makes the iPhone?')
Data-aware streaming:
const barStream = ai
  .chat('Give 5 top bars in London')
  .getArray(d => d
    .object(obj => obj
      .hasString('name')
      .hasString('address', /* optional hint */ 'Street and address only!')
    )
  )
for await (const bar /* type safe */ of barStream) {
  console.log(bar)
}
Each bar is logged as soon as the LLM finishes outputting it. Behind the scenes, Promptgun
- told the LLM what shape its output data should be,
- parsed that data into JS types,
- reorganized the stream so that each "event" is a complete element of the requested output array,
- correctly TypeScript-typed each of those output elements, and
- passed along your optional hints.
If the output type is not an array, streaming it yields the accumulated partial parsed JSON, "what has come in so far", as it arrives. The incomplete data is parsed for you as best as possible even while the underlying JSON is still incomplete.
const parsedPartialJsonStream = ai
  .chat('What is the best bar in Paris?')
  .getObject(o => o
    .hasString('name')
    .hasString('address')
  )
for await (const parsedPartialJson of parsedPartialJsonStream) {
  // do stuff with the parsed partial JSON
}
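To make "parsed as best as possible" concrete, here is a naive best-effort repair of truncated JSON. This is an illustration only, not Promptgun's actual parser:

```typescript
// Illustration only: naively repair incomplete JSON by closing any
// unfinished string and any still-open objects/arrays, then parsing.
// This is NOT Promptgun's implementation, just the general idea.
function parsePartial(input: string): unknown {
  const closers: string[] = []
  let inString = false
  let escaped = false
  for (const ch of input) {
    if (escaped) { escaped = false; continue }
    if (ch === '\\') { escaped = true; continue }
    if (ch === '"') { inString = !inString; continue }
    if (inString) continue
    if (ch === '{') closers.push('}')
    else if (ch === '[') closers.push(']')
    else if (ch === '}' || ch === ']') closers.pop()
  }
  let repaired = input
  if (inString) repaired += '"'             // close an unfinished string
  repaired = repaired.replace(/,\s*$/, '')  // drop a trailing comma
  while (closers.length) repaired += closers.pop() // close open containers
  try {
    return JSON.parse(repaired)
  } catch {
    return undefined // not repairable yet; wait for more chunks
  }
}

console.log(parsePartial('{"name": "Le Bar", "address": "12 Rue'))
// → { name: 'Le Bar', address: '12 Rue' }
```

Each chunk of the underlying stream extends the raw JSON text, so re-running a repair like this on every chunk yields the growing sequence of partial objects the loop above receives.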
const restaurants /* type: {name: string, description?: string}[] */ = await ai
  .chat('Give 5 top restaurants in London')
  .getArray(d => d
    .object(obj => obj
      .hasString('name')
      .canHaveString('description', 'A 50 character description')
    )
  )
await ai
  .image('A black hole')
  .imageSize('1024x1024') // optional
  .model('gpt-image-1') // optional, default: gpt-image-1
  .toFile('blackhole.png')
This writes the file, but it also returns a reference to that file:
const file = await ai
  .image('A black hole')
  .toFile('blackhole.png')
You can also avoid writing a file altogether and get the byte array directly:
const byteArray = await ai.image('A black hole')
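If you later decide you do want a file, the byte array can be written with Node's standard fs module. A minimal sketch; the placeholder bytes below stand in for the result of `await ai.image('A black hole')`:

```typescript
import { writeFile } from 'node:fs/promises'

// Placeholder bytes; in real use this would be the byte array
// returned by `await ai.image('A black hole')`.
const byteArray = new Uint8Array([0x89, 0x50, 0x4e, 0x47])

// Write the raw image bytes to disk.
await writeFile('blackhole.png', byteArray)
```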
Before running any prompts, call:
setupAI({
  promptGridApiKey: '<your PromptGrid API key>', // optional
  apiKeys: {
    openai: '<Your OpenAI API key>', // optional
    // etc
  },
})
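In practice you will usually read keys from environment variables rather than hard-coding them. A minimal sketch, assuming the setupAI shape above; the 'promptgun' import path is an assumption, adjust it to your install:

```typescript
import { setupAI } from 'promptgun' // import path assumed

// Keys come from the environment, e.g. a .env file or CI secrets.
setupAI({
  promptGridApiKey: process.env.PROMPTGRID_API_KEY, // optional
  apiKeys: {
    openai: process.env.OPENAI_API_KEY, // optional
  },
})
```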
Get your PromptGrid API key for free at PromptGrid.ai.
Post on our feedback and help board. We love hearing from you 👌☀️❤️.
By using Promptgun, some metadata about your prompt code is saved to the PromptGrid servers, including the code of the callback you provide to the "completeChat" clause of a Promptgun call and where in your code you call your prompts. The content of individual prompt calls is not stored in PromptGrid unless you opt in at promptgrid.ai/prompts. You can delete any data stored on PromptGrid at any time.