🤖 openai-ext

Extension to OpenAI's API to support streaming chat completions.



Documentation

Read the official documentation.

View the live demo.

Overview

This project extends OpenAI's API to support streaming chat completions on both the server (Node.js) and client (browser).

Note: This is an unofficial workaround until OpenAI's official library supports streaming. The issue is tracked here: How to use stream: true? #18.

Features include:

  • 💻 Support for streaming chat completions
    • Easy-to-use API extension for chat completion streaming support.
  • ⚙️ Easy to configure
    • Dead-simple configuration for API and stream handlers.
  • 📜 Content draft parsing
    • Content is parsed for you and provided in an easy-to-digest format as it streams.
  • 🌎 Works in both server (Node.js) and client (browser) environments
    • Stream completions in either environment: Node.js or the browser!
  • 🛑 Support for stopping completions
    • Stop completions before they finish, just like ChatGPT allows.

Donate

If this project helped you, please consider buying me a coffee. Your support is much appreciated!



Installation

npm i openai-ext

Quick Start

Browser / Client

View the live demo.

Use the following solution in a browser environment:

import { OpenAIExt } from "openai-ext";

// Configure the stream (use type ClientStreamChatCompletionConfig for TypeScript users)
const streamConfig = {
  apiKey: `123abcXYZasdf`, // Your API key
  handler: {
    // Content contains the string draft, which may be partial. When isFinal is true, the completion is done.
    onContent(content, isFinal, xhr) {
      console.log(content, "isFinal?", isFinal);
    },
    onDone(xhr) {
      console.log("Done!");
    },
    onError(error, status, xhr) {
      console.error(error);
    },
  },
};

// Make the call and store a reference to the XMLHttpRequest
const xhr = OpenAIExt.streamClientChatCompletion(
  {
    model: "gpt-3.5-turbo",
    messages: [
      { role: "system", content: "You are a helpful assistant." },
      { role: "user", content: "Tell me a funny joke." },
    ],
  },
  streamConfig
);
// To stop the completion at any time, abort the XMLHttpRequest.
// The onDone() handler will be called.
xhr.abort();

Node.js / Server

Use the following solution in a Node.js or server environment:

import { Configuration, OpenAIApi } from 'openai';
import { OpenAIExt } from "openai-ext";

const apiKey = `123abcXYZasdf`; // Your API key
const configuration = new Configuration({ apiKey });
const openai = new OpenAIApi(configuration);

// Configure the stream (use type ServerStreamChatCompletionConfig for TypeScript users)
const streamConfig = {
  openai: openai,
  handler: {
    // Content contains the string draft, which may be partial. When isFinal is true, the completion is done.
    onContent(content, isFinal, stream) {
      console.log(content, "isFinal?", isFinal);
    },
    onDone(stream) {
      console.log('Done!');
    },
    onError(error, stream) {
      console.error(error);
    },
  },
};

const axiosConfig = {
  // ...
};

// Make the call to stream the completion
OpenAIExt.streamServerChatCompletion(
  {
    model: 'gpt-3.5-turbo',
    messages: [
      { role: 'system', content: 'You are a helpful assistant.' },
      { role: 'user', content: 'Tell me a funny joke.' },
    ],
  },
  streamConfig,
  axiosConfig
);

If you'd like to stop the completion, call stream.destroy(). The onDone() handler will be called.

const response = await OpenAIExt.streamServerChatCompletion(...);
const stream = response.data;
stream.destroy();

You can also stop the completion using Axios cancellation via the Axios config (pending #134).
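Once Axios config forwarding is available, wiring up cancellation might look like the following sketch. The `signal` option is standard Axios (v0.22+); whether openai-ext forwards it to the underlying request is an assumption here, not confirmed library behavior:

```typescript
// Sketch: cancelling the underlying request with an AbortController.
// Assumes the Axios config is forwarded to the request (pending #134).
const controller = new AbortController();

const axiosConfig = {
  // Standard Axios (v0.22+) cancellation option.
  signal: controller.signal,
};

// Later, to stop the completion:
controller.abort();
console.log(controller.signal.aborted); // true
```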

Content Parsing Utility

Under the hood, the function OpenAIExt.parseContentDraft(dataString) extracts completion content from the data string received while streaming.

Feel free to use this if you'd like to handle streaming in a different way than this library provides.

The data string contains lines of JSON completion data, each starting with data: and separated by two newlines. The stream is terminated by the line data: [DONE], at which point the completion content can be considered final.

When passed a data string, the function returns completion content in the following shape:

{
  content: string; // Content string. May be partial.
  isFinal: boolean; // When true, the content string is complete and the completion is done.
}

If you're using this library for streaming completions, parsing is handled for you automatically and the result will be provided via the onContent handler callback documented above.
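The parsing described above can be sketched as follows. This is an illustrative reimplementation, not the library's actual source, and it assumes the standard OpenAI streaming chunk shape (`choices[0].delta.content`):

```typescript
// Illustrative sketch of content-draft parsing (not the library's actual source).
// Assumes OpenAI streaming chunks shaped like: { choices: [{ delta: { content } }] }.
interface ContentDraft {
  content: string; // Content string. May be partial.
  isFinal: boolean; // True once `data: [DONE]` has been received.
}

function parseContentDraftSketch(dataString: string): ContentDraft {
  let content = "";
  let isFinal = false;
  // Lines of JSON completion data start with `data: ` and are separated by two newlines.
  for (const line of dataString.split("\n\n")) {
    const trimmed = line.trim();
    if (!trimmed.startsWith("data:")) continue;
    const payload = trimmed.slice("data:".length).trim();
    if (payload === "[DONE]") {
      isFinal = true;
      break;
    }
    try {
      const chunk = JSON.parse(payload);
      content += chunk.choices?.[0]?.delta?.content ?? "";
    } catch {
      // Ignore partial JSON at the tail of an in-flight stream.
    }
  }
  return { content, isFinal };
}

// Example:
const sample =
  'data: {"choices":[{"delta":{"content":"Hello"}}]}\n\n' +
  'data: {"choices":[{"delta":{"content":" world"}}]}\n\n' +
  "data: [DONE]";
console.log(parseContentDraftSketch(sample)); // { content: "Hello world", isFinal: true }
```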

TypeScript

Type definitions have been included for TypeScript support.

Icon Attribution

Favicon by Twemoji.

Contributing

Open source software is awesome and so are you. 😎

Feel free to submit a pull request for bugs or additions, and make sure to update tests as appropriate. If you find a mistake in the docs, send a PR! Even the smallest changes help.

For major changes, open an issue first to discuss what you'd like to change.

Found It Helpful? Star It!

If you found this project helpful, let the community know by giving it a star: 👉⭐

License

MIT. See LICENSE.md.
