A reverse-engineered Node.js client for interacting with the Blackbox.ai API. This module enables you to perform chat completions with both streaming and aggregated responses. It features router-style model naming for seamless IntelliSense auto-completion and throttles requests based on token usage.
Important:
This project is based on reverse engineering Blackbox.ai's API. To make successful requests, you must supply a validated token. You can obtain this token by opening your browser's Developer Tools while using Blackbox.ai and inspecting the network request payloads for the `validated` field.
- Chat Completion: Easily create chat completions with support for streaming responses.
- Token Management: Uses a GPT-3 tokenizer to count tokens accurately and throttle requests based on tokens-per-second (TPS); a rough sketch of the idea follows this list.
- IntelliSense Support: Model names use a router-style format (e.g., `deepseek/deepseek-r1`) to provide autocomplete suggestions in your editor.
- Reverse Engineered: Designed to work with Blackbox.ai by reverse engineering their request flow.
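The `maxTPS` option drives the throttling mentioned above. As a rough sketch of the idea only (not the module's actual implementation), counting tokens with the `gpt-3-encoder` package and delaying proportionally could look like this:

```js
// Rough sketch only: illustrates tokens-per-second throttling, not the module's internals.
// Assumes the gpt-3-encoder package (npm install gpt-3-encoder); the client may use a
// different tokenizer and a more sophisticated rate limiter.
const { encode } = require('gpt-3-encoder');

// Count tokens across all message contents.
function countTokens(messages) {
  return messages.reduce((sum, m) => sum + encode(m.content).length, 0);
}

// Wait long enough that sending these tokens stays under maxTPS.
async function throttle(messages, maxTPS) {
  if (!Number.isFinite(maxTPS)) return; // default maxTPS is Infinity: no delay
  const waitMs = (countTokens(messages) / maxTPS) * 1000;
  await new Promise((resolve) => setTimeout(resolve, waitMs));
}
```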
Install the module via NPM:
```bash
npm install blackbox-ai-client
```
If the module is not published yet, clone the repository and run `npm install` in the project directory.
Below is an example of how to use the BlackboxAI Client:
```js
const { BlackboxAIClient } = require('blackbox-ai-client');

(async () => {
  // Create an instance of the client.
  // Replace "YOUR_VALIDATED_TOKEN" with the token you obtained from Blackbox.ai.
  const client = new BlackboxAIClient({
    validated: "YOUR_VALIDATED_TOKEN",
    // Optionally, specify a model. Options:
    // - deepseek/deepseek-r1 (designed for reasoning tasks)
    // - deepseek/deepseek-v3 (versatile for various applications)
    // - deepseek/deepseek-reasoner (alias for deepseek/deepseek-r1)
    model: "deepseek/deepseek-v3"
  });

  try {
    // Create a chat completion.
    const response = await client.createChatCompletion({
      messages: [
        { role: "user", content: "Hello, how do I use BlackboxAI?" }
      ],
      // Set stream to false to get the aggregated response.
      stream: false
    });
    console.log("Chat Completion Response:", response);
  } catch (error) {
    console.error("Error creating chat completion:", error);
  }
})();
```
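For streaming responses, set `stream: true`. According to the `createChatCompletion` reference below, the promise then resolves to a ReadableStream; one way to consume it, assuming the stream is async-iterable (true for Node.js Readable streams and for web ReadableStreams on recent Node.js versions), is:

```js
const { BlackboxAIClient } = require('blackbox-ai-client');

(async () => {
  const client = new BlackboxAIClient({
    validated: "YOUR_VALIDATED_TOKEN",
    model: "deepseek/deepseek-v3"
  });

  // With stream: true, createChatCompletion resolves to a ReadableStream.
  const stream = await client.createChatCompletion({
    messages: [{ role: "user", content: "Hello, how do I use BlackboxAI?" }],
    stream: true
  });

  // Assumes the stream is async-iterable; the exact chunk format depends on the API.
  for await (const chunk of stream) {
    process.stdout.write(chunk.toString());
  }
})();
```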
To retrieve the validated token:
- Open Blackbox.ai in your browser.
- Open the Developer Tools (usually by pressing `F12` or right-clicking and selecting Inspect).
- Navigate to the Network tab and perform an action that sends a chat request.
- Find the request payload (look for the `/api/chat` endpoint) and locate the `validated` field.
- Copy the value of the `validated` token and use it in your client configuration.
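To avoid hard-coding the token, you can load it from an environment variable. The variable name below is just an illustration, not something the module requires:

```js
const { BlackboxAIClient } = require('blackbox-ai-client');

// BLACKBOX_VALIDATED_TOKEN is an arbitrary name chosen for this example.
const client = new BlackboxAIClient({
  validated: process.env.BLACKBOX_VALIDATED_TOKEN
});
```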
`new BlackboxAIClient(options)`
- Parameters:
  - `options` (object) – Configuration options.
    - `baseUrl` (string, optional) – The API endpoint (default: `https://www.blackbox.ai/api/chat`).
    - `model` (ModelName, optional) – The model to use for chat completions. See usage above.
    - `validated` (string, required) – The validation token obtained from Blackbox.ai.
    - `proxy` (string, optional) – Proxy URL if needed.
    - `maxTPS` (number, optional) – Maximum tokens per second allowed (default: `Infinity`).
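Putting the constructor options together, an illustrative configuration (the values below are placeholders, not recommendations) might look like:

```js
const { BlackboxAIClient } = require('blackbox-ai-client');

const client = new BlackboxAIClient({
  baseUrl: "https://www.blackbox.ai/api/chat", // default endpoint
  model: "deepseek/deepseek-r1",
  validated: "YOUR_VALIDATED_TOKEN",           // required
  proxy: "http://127.0.0.1:8080",              // optional proxy URL (placeholder)
  maxTPS: 50                                   // throttle to at most 50 tokens per second
});
```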
`createChatCompletion(params)`
- Description: Creates a chat completion using the provided messages. Supports both streaming and aggregated responses.
- Parameters:
  - `params` (object)
    - `messages` (Array) – An array of message objects with `role` and `content`.
    - `model` (ModelName, optional) – Override the default model.
    - `trendingAgentMode` (object, optional) – Additional mode configuration.
    - `stream` (boolean, optional) – If set to `true`, returns a ReadableStream.
- Returns: A Promise that resolves to either an aggregated response object or a ReadableStream.
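As a short sketch of the per-call options, the client's default model can be overridden for a single request; the shape of the aggregated response object is whatever the API returns:

```js
const { BlackboxAIClient } = require('blackbox-ai-client');

const client = new BlackboxAIClient({
  validated: "YOUR_VALIDATED_TOKEN",
  model: "deepseek/deepseek-v3"
});

(async () => {
  const response = await client.createChatCompletion({
    messages: [{ role: "user", content: "Explain recursion in one sentence." }],
    model: "deepseek/deepseek-r1", // override the default model for this call only
    stream: false                  // aggregated response rather than a ReadableStream
  });
  console.log(response);
})();
```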
Contributions are welcome! If you have ideas or improvements, please open an issue or submit a pull request.
This project is licensed under the Apache-2.0 License.
This client is a reverse engineering project for educational and experimental purposes only. Use it responsibly and at your own risk.