# Smart request balancer
A smart request queue with fine-grained control over execution rates and limits.
```sh
npm install smart-request-balancer
```

or

```sh
yarn add smart-request-balancer
```
```js
const Queue = require('smart-request-balancer');
```
Imagine you have a Telegram bot and need to follow Telegram's rules for sending messages. The Telegram Bot API documentation says that your bot cannot send more than 1 message per second to a person and no more than 20 messages per minute to a group/chat/channel. You can easily configure this in smart request balancer:
```js
const queue = new Queue({
  rules: {
    telegramIndividual: { // Rule for sending private message via telegram API
      rate: 1,            // one message
      limit: 1,           // per second
      priority: 1
    },
    telegramGroup: {      // Rule for sending group message via telegram API
      rate: 20,           // 20 messages
      limit: 60           // per minute
    }
  }
});
```
And then just send this message easily:
```js
const axios = require('axios');

queue.request(retry => axios.get(url)
  .then(response => response.data)
  .catch(error => {
    if (error.response.status === 429) {
      return retry(error.response.data.parameters.retry_after);
    }
    throw error;
  }), user_id, 'telegramIndividual')
  .then(response => console.log(response)); // our actual response
```
Here you see that we are calling `queue.request()` with 3 parameters:
- `fn` — Request handler: a promise-returning function which will be executed
- `key` — Unique key of the request (for example, the user_id of the chat)
- `rule` — Rule name: the rule which we configured at queue creation
You can also see that we are handling retries in the request handler. That's our plan B in case the Telegram API somehow gets a request overflow. Just call this `retry` function with a delay, and the request will be fulfilled right after that time.
```js
const queue = new Queue({
  rules: {                      // Describing our rules by rule name
    common: {                   // Common rule. Will be used if you won't provide rule argument
      rate: 30,                 // Allow to send 30 messages
      limit: 1,                 // per 1 second
      priority: 1,              // Rule priority. The lower the priority is, the higher the chance
                                // that this rule will execute faster
    }
  },
  default: {                    // Default rules (if provided rule name is not found)
    rate: 30,
    limit: 1
  },
  overall: {                    // Overall queue rates and limits
    rate: 30,
    limit: 1
  },
  retryTime: 300,               // Default retry time. Can be configured in retry fn
  ignoreOverallOverheat: true   // Should we ignore overheat of queue itself
});
```
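To build intuition for how `rate` and `limit` combine, here is a small self-contained sketch (a helper of my own, not part of the library) that converts a rule into the approximate minimum spacing between requests:

```javascript
// A rule allowing `rate` requests per `limit` seconds means, on average,
// one request every limit/rate seconds. This is an illustration of the
// semantics described above, not the library's internal pacing code.
function minIntervalMs(rule) {
  return (rule.limit / rule.rate) * 1000;
}

console.log(minIntervalMs({ rate: 30, limit: 1 }));  // common rule: ~33.3 ms apart
console.log(minIntervalMs({ rate: 20, limit: 60 })); // telegramGroup: 3000 ms apart
```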
To make requests, you should provide a callback which takes one argument, `retry`, and returns a promise:
```js
const key = user_id;               // Some telegram user id
const rule = 'telegramIndividual'; // Our rule for sending messages to chats

queue.request(retry => axios.get(url)
  .then(response => response.data)
  .catch(error => {
    if (error.response.status === 429) {
      return retry(error.response.data.parameters.retry_after);
    }
    throw error;
  }), key, rule);
```
You can use any promise-based library to make requests. The promise's resolved value will be passed through to the caller.
`queue.request(...)` returns a promise which resolves only when the queue has executed the request and obtained its result.
Let's extend our previous example:
```js
queue.request(retry => axios.get(url)
  .then(response => response.data)
  .catch(error => {
    if (error.response.status === 429) {
      return retry(error.response.data.parameters.retry_after);
    }
    throw error;
  }), key, rule)
  .then(response => console.log(response)) // our actual response
  .catch(error => console.error(error));   // our request error (excluding 429)
```
Each rule has its own priority. This allows more urgent requests to execute faster than less urgent ones. Imagine you have two rules: one for individual messages and one for broadcasting. Broadcasting can be a long-running routine, and you should not have to wait for it to finish before sending a private message to somebody. In that case, give priority 1 to private messages and priority 2 to broadcasting. The queue will then send the broadcast continuously, but as soon as a private message arrives it will interrupt the broadcast, send the message, and continue.
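The scheduling idea above can be sketched with a toy example (hypothetical code, not the library's internals): among pending requests, the one whose rule has the lowest priority number is picked first, so a private message jumps ahead of queued broadcast items.

```javascript
// Pending requests tagged with their rule's priority
// (lower number = more urgent, matching the convention above).
const pending = [
  { key: 'broadcast-1', priority: 2 },
  { key: 'private-42',  priority: 1 },
  { key: 'broadcast-2', priority: 2 },
];

// Pick the most urgent request; ties keep submission order because
// the comparison is strict.
function nextRequest(queue) {
  return queue.reduce((best, item) => (item.priority < best.priority ? item : best));
}

console.log(nextRequest(pending).key); // → private-42
```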
- `request(handler: (retry: RetryFunction) => Promise, key: string, rule: string) => Promise` — the main entry point for making requests with this library
- `get totalLength(): number` — getter for the total length of the queue
- `get isOverheated(): boolean` — getter for verifying whether the queue is overheated
Getting retry error
You should use the `retry` function in the request handler to schedule a retry for the request. You can easily detect this case by the HTTP 429 status code. Sometimes servers also return a `retry_after` parameter, which you can pass to the `retry` function to set the retry interval for this request. You don't need to do anything special: the promise will only be resolved once the server responds correctly.
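As an illustration, here is a hedged sketch of the decision a handler typically makes. The `retryDelaySeconds` helper and the error shape are assumptions based on axios-style errors and Telegram's 429 payload; they are not part of this library.

```javascript
// Decide whether an error is a rate-limit error and, if so, how long to
// wait. The retry_after location (error.response.data.parameters.retry_after)
// follows Telegram's 429 payload; adjust it for other APIs.
function retryDelaySeconds(error, fallback = 300) {
  if (!error.response || error.response.status !== 429) return null; // not a rate-limit error
  const params = error.response.data && error.response.data.parameters;
  return (params && params.retry_after) || fallback; // use server hint, else fallback
}

const telegram429 = {
  response: { status: 429, data: { parameters: { retry_after: 5 } } },
};
console.log(retryDelaySeconds(telegram429));                  // → 5
console.log(retryDelaySeconds({ response: { status: 500 } })); // → null (rethrow instead)
```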
Sometimes you need to set an overall limit for the queue itself (e.g. the Telegram API restricts bots to no more than 30 messages per second overall). For that purpose, configure the `overall` rule in the config and set `ignoreOverallOverheat` to false.
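For example, a configuration along these lines (a sketch following the config shape shown earlier, not a verbatim recipe) would enforce Telegram's overall cap on top of the per-chat rules:

```javascript
const queue = new Queue({
  rules: {
    telegramIndividual: { rate: 1, limit: 1, priority: 1 },
    telegramGroup: { rate: 20, limit: 60, priority: 2 },
  },
  overall: { rate: 30, limit: 1 }, // queue-wide cap: 30 requests per second
  ignoreOverallOverheat: false,    // actually enforce the overall cap
});
```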
To debug the queue, you can use an environment variable.
You can use this queue not only for API requests. This library can also be used for any routines which should be queued and executed sequentially based on rules, grouping, priority and ability to retry.