
Promptgun

The simplest, most advanced LLM prompting library.

How to use

Text output – streamed

const stream = ai.chat('How to make bread?')
let recipe = ''
for await (const chunk of stream) {
  recipe += chunk
}

Text output – single

const company = await ai.chat('What company makes the iPhone?')

Data output – streamed

Stream data-aware:

const restaurantStream = ai
  .chat('Give 5 top restaurants in London')
  .getArray(d => d
    .object(obj => obj
      .hasString('name')
      .hasString('address', /* optional hint */ 'Street and address only!')
    )
  )
for await (const restaurant /* type safe */ of restaurantStream) {
  console.log(restaurant)
}

Note that each restaurant is logged as soon as the LLM finishes outputting it. To make that work, Promptgun

  • told the LLM what shape its output data should be,
  • parsed that data to JS types,
  • reorganized the stream so that each "event" is a complete element of the requested output array,
  • correctly TypeScript-typed each of those output elements, and
  • passed your optional hints along.
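
The reorganization step is the non-trivial part. As a rough sketch of what it involves — independent of Promptgun's actual internals, which are not public — here is the kind of scan that turns a growing JSON buffer into complete array elements:

```typescript
// Hypothetical sketch: scan the JSON received so far and yield each complete
// top-level element of the output array as soon as its closing brace arrives.
// This is NOT Promptgun's implementation, only an illustration of the idea.
function* completeElements(buffer: string): Generator<unknown> {
  let depth = 0
  let start = -1
  let inString = false
  for (let i = 0; i < buffer.length; i++) {
    const ch = buffer[i]
    if (inString) {
      if (ch === '\\') i++              // skip the escaped character
      else if (ch === '"') inString = false
      continue
    }
    if (ch === '"') inString = true
    else if (ch === '{' || ch === '[') {
      if (depth === 1 && ch === '{') start = i // element begins inside the array
      depth++
    } else if (ch === '}' || ch === ']') {
      depth--
      if (depth === 1 && start >= 0) {
        yield JSON.parse(buffer.slice(start, i + 1))
        start = -1
      }
    }
  }
}
```

Fed the prefix of a streamed array, this yields only the elements that have fully arrived; a trailing incomplete element is held back until its closing brace shows up.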

If the output type is not an array, streaming instead yields the accumulated partial result ("what has come in so far") on each chunk. The incomplete JSON is parsed for you as best as possible, even while the underlying response is still arriving.

const parsedPartialJsonStream = ai
  .chat('What is the best bar in Paris?')
  .getObject(o => o
    .hasString('name')
    .hasString('address')
  )
for await (const parsedPartialJson of parsedPartialJsonStream) {
  // do stuff with parsed partial json
}
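
Under the hood, "best effort" parsing of incomplete JSON roughly means: close whatever is still open, then parse. The sketch below illustrates the idea; it is a deliberate simplification (a key cut off mid-way, for instance, would still fail) and is not Promptgun's actual parser:

```typescript
// Hypothetical sketch: repair a truncated JSON prefix by closing unterminated
// strings, objects and arrays, then parse the result.
function parsePartial(json: string): unknown {
  const closers: string[] = []
  let inString = false
  for (let i = 0; i < json.length; i++) {
    const ch = json[i]
    if (inString) {
      if (ch === '\\') i++                 // skip the escaped character
      else if (ch === '"') inString = false
      continue
    }
    if (ch === '"') inString = true
    else if (ch === '{') closers.push('}')
    else if (ch === '[') closers.push(']')
    else if (ch === '}' || ch === ']') closers.pop()
  }
  let repaired = json
  if (inString) repaired += '"'            // close an unterminated string
  repaired = repaired.replace(/,\s*$/, '') // drop a dangling comma
  if (/:\s*$/.test(repaired)) repaired += 'null' // value not started yet
  while (closers.length) repaired += closers.pop()
  return JSON.parse(repaired)
}
```

Each incoming chunk would be appended to the buffer and re-parsed this way, which is why the yielded objects grow field by field as the response streams in.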

Data output – single

const restaurants /* type: {name: string, description?: string}[] */ = await ai
  .chat('Give 5 top restaurants in London')
  .getArray(d => d
    .object(obj => obj
      .hasString('name')
      .canHaveString('description', 'A 50 character description')
    )
  )

Image output

await ai
  .image('A black hole')
  .imageSize('1024x1024') // optional
  .model('gpt-image-1') // optional, default: gpt-image-1
  .toFile('blackhole.png')

This writes the file, but it also returns a reference to that file:

const file = await ai
  .image('A black hole')
  .toFile('blackhole.png')

You can also avoid writing a file altogether and get the byte array directly:

const byteArray = await ai.image('A black hole')

Setup

Before making any prompts, call:

setupAI({
  promptGridApiKey: '<your PromptGrid API key>', // optional
  apiKeys: {
    openai: '<Your OpenAI API key>', // optional
    // etc
  },
})

Get your PromptGrid API key for free at PromptGrid.ai.

Feedback and help

Post at our feedback and help board. We love to hear from you 👌☀️❤️.

Terms of use

By using Promptgun, some metadata about your prompt code will be saved to the PromptGrid servers, including the code of the callback you provide to the "completeChat" clause of a Promptgun call and the locations in your code where you call your prompts. The content of individual prompt calls is not stored in PromptGrid unless you opt in at promptgrid.ai/prompts. You can delete any data stored on PromptGrid at any time.

Install

npm i promptgun

License

See license in LICENSE.md