ChatGPT REPL

An interactive ChatGPT command line REPL that uses the OpenAI API, with pay-per-usage billing.

An experiment inspired by two tweets.

Features

  • Bring your own API key
  • Shell like REPL with history access and commands
  • Stream responses
  • Controllable conversation context
  • Clipboard copying
  • Local API usage tracking
  • Access input history from previous sessions

See the current roadmap for plans and ideas, including issues with a help wanted label. Feel free to open an issue if you encounter any bugs or have suggestions.

Preview

[Screenshot: the chatgpt-repl shell command in action, replying with a reassuring haiku to the prompt 'write a haiku about benevolent AI overlords']

Install

Use your favourite package manager to install the chatgpt-repl command globally. With npm:

npm install -g chatgpt-repl
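
With yarn (classic) or pnpm, the equivalent global install is (assuming either is already set up on your machine):

yarn global add chatgpt-repl
pnpm add -g chatgpt-repl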

Usage

Once installed:

  1. Get an OpenAI API key.
  2. Run chatgpt-repl.
  3. Provide the API key when asked (or supply it ahead of time, as shown after this list).
  4. Enter a ChatGPT prompt ('Should Pluto be a planet?') and hit enter.
  5. See available commands with .help.
  6. Hit Ctrl-C or enter .exit to end the session when sufficiently filled with knowledge.
  7. Use the up/down arrows to access previously entered prompts or commands.
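
If you prefer not to enter the key interactively, a sketch of a non-interactive setup (assuming a POSIX shell) uses the documented CHATGPTREPL_APIKEY environment variable or the -k option:

export CHATGPTREPL_APIKEY="<your OpenAI API key>"
chatgpt-repl

# or pass (and save) the key with the -k option:
chatgpt-repl -k "<your OpenAI API key>"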

Responses are streamed in by default. This can be disabled with the .stream false command or the --stream false CLI option. Streamed responses currently don't include token usage from the API and are not included in usage tracking (issue to estimate usage).

Cancel a request in progress with Ctrl-C at any time, even when streaming.

REPL input is saved into a history file (~/.chatgpt-repl-history, or the location set by the CHATGPTREPL_HISTORYFILE env variable). This can be disabled with the --history false CLI option. The last 100 (or CHATGPTREPL_HISTORYSIZE) history entries are loaded on startup.
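
For example, to keep a larger history in a custom location, a sketch using the documented variables (the path is illustrative):

CHATGPTREPL_HISTORYSIZE=500 CHATGPTREPL_HISTORYFILE=~/.chatgpt-history chatgpt-repl

# or turn history logging off for a session:
chatgpt-repl --history false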

Supported models

Currently, the default model is gpt-3.5-turbo. Use the .models command to see the list of supported models.

Supported models are retrieved from the OpenAI API and cached locally for 3 days. Run .models update to force an update.

What about GPT-4?

GPT-4 was announced on March 14th, 2023 and API support for it started out with a waitlist. If it's available to you, provide an alternate model name via --model gpt-4 (or the .model gpt-4 command).
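
For example, assuming GPT-4 access is enabled on your OpenAI account (the .model gpt-4 command does the same inside a running session):

chatgpt-repl --model gpt-4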

Note however that GPT-4's pricing appears to be significantly higher than GPT-3's. The current API usage costs shown by this tool are based on GPT-3's pricing (issue for model specific pricing).

Command line options

  • -v, --version Show version number

  • -h, --help Show help

  • -c, --clipboard <boolean> Enable/disable copying the latest response to the clipboard as it is shown (default: 'false' or CHATGPTREPL_CLIPBOARD env)

  • -k, --api-key <key> Set (and save) OpenAI API key (CHATGPTREPL_APIKEY env)

  • -l, --conversation-length Set conversation history length (default: 3 or CHATGPTREPL_CONVERSATIONLENGTH env)

  • --history-size Set history size (default: 100 or CHATGPTREPL_HISTORYSIZE env)

  • --history-file History file location (default: ~/.chatgpt-repl.history or CHATGPTREPL_HISTORYFILE env)

  • -m, --model Set the OpenAI model (default: gpt-3.5-turbo or CHATGPTREPL_MODEL env)

  • -t, --temperature Set the temperature for more 'random' responses (default: 1 or CHATGPTREPL_TEMPERATURE env)

  • -s, --system Set the system prompt used to set the tone of the response (default: 'You are a helpful assistant' or CHATGPTREPL_SYSTEM env)

  • -r, --stream Enable/disable streamed responses (default: 'true' or CHATGPTREPL_STREAM env)

  • -w, --word-wrap <boolean> Enable/disable automatic word wrapping in response output (default: 'true' or CHATGPTREPL_WORDWRAP env)

  • -x, --max-tokens <num> Set the max tokens to use and control costs (default: 1024 or CHATGPTREPL_MAXTOKENS env)

Defaults can be overridden with environment variables where indicated ('env').
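
For example, the same settings can be supplied either way (a sketch; the values are illustrative):

# one-off CLI options
chatgpt-repl --model gpt-3.5-turbo --temperature 0.5 -x 512

# or persistent defaults via environment variables
export CHATGPTREPL_TEMPERATURE=0.5
export CHATGPTREPL_MAXTOKENS=512
chatgpt-repl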

OpenAI API costs

OpenAI API usage is billed after a free trial, but it is extremely cheap: 1,000 tokens (currently) cost $0.002 for the gpt-3.5-turbo model used by this tool by default. Each word in a question and response uses at least 1 token.
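
At that rate, a prompt and response totalling 1,500 tokens would cost roughly 1,500 / 1,000 × $0.002 = $0.003, and the default max-tokens limit of 1024 caps a single response at roughly $0.002.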

Controlling conversation context

By default, the last 5 prompts/responses in a session are used in addition to a new prompt, to provide ChatGPT with additional context. This allows for follow-up prompts that reference a previous response, but also increases costs by using more API tokens.

The -l <num> option (or CHATGPTREPL_CONVERSATIONLENGTH environment variable or .conversationlength command) can be used to change this conversation length, by indicating how much of an earlier conversation to reuse. So -l 0 would not send any previous conversation context back and -l 1 would only use the most recent prompt/response for context.

Thus with a conversation length of zero, you couldn't ask 'What is the Sun?' and later 'How far away from the Earth is it?', since ChatGPT would have no frame of reference for 'it'.
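
For example (a sketch; the .conversationlength command changes the same setting inside a running session):

# send no previous context with each prompt
chatgpt-repl -l 0

# reuse only the most recent prompt/response
CHATGPTREPL_CONVERSATIONLENGTH=1 chatgpt-repl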

Enhancements to conversation tracking are planned.

Commands

Instead of entering a prompt, you can also use a number of commands to modify settings or perform other actions. Use .help for the full list:

  • .settings shows all the current settings
  • .reset resets one or all settings to their values from when you launched the current session
  • .model, .temperature, .maxtokens, .conversationlength or .system followed by a value will change that setting, or show the current value when no value is given
  • .models [update] shows/force updates the list of supported models
  • .retry reruns the last prompt with the latest settings
  • .last shows the last response again without sending it through ChatGPT again (applies new word wrap/clipboard settings)
  • .clear clears the conversation history
  • .copy copies the most recent response to your system clipboard
  • .clipboard enables/disables clipboard copying for every response (off by default)
  • .wordwrap enables/disables word wrapping for the response output (on by default)
  • .history enables/disables input logging to history file (on by default)
  • .stream enables/disables streamed responses (on by default)
  • .usage shows current and total API usage
  • .exit ends the current session

Values to set boolean fields can include true, false, 1, 0, on, off, etc.

Hitting tab will attempt to complete a command.
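
As an illustrative sequence inside a running session: check the current settings, raise the temperature, rerun the last prompt with the new value, copy the result, and review API usage (the values are examples only):

.settings
.temperature 1.2
.retry
.copy
.usage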

Even ChatGPT loves this!

When asked 'What are the benefits of a ChatGPT command line interface?', it wholeheartedly endorsed this tool:

There are several benefits of ChatGPT command line interface, including:

  1. Faster and More Efficient Communication: With command line interface, users can communicate with the ChatGPT model much faster and efficiently than with a graphical interface.

  2. Improved Productivity: With command line interface, users can execute multiple commands in quick succession, which can help increase productivity.

  3. Ease of Use: Command line interface is generally easier to use for users who are familiar with the command line interface. The syntax, while it may look complicated for beginners, is generally quicker than using a graphical interface.

  4. Customizability: By using a command line interface, users can customize their experience with ChatGPT based on their needs and preferences.

  5. Script Automation: ChatGPT command line interface can be used to create scripts that can be used to automate certain tasks, which can help save time and increase productivity.

Stack

Some of the libraries used:

The REPL prompt and history use the built-in Node readline API.

- Joost Schuur (@joostschuur)
