# Async Interval Queue
A simple, dependency-free queue for scheduling jobs to run at an interval. It was built originally to space out API requests while requiring minimal code changes: simply replace any async call with the `add` function, and you get a promise that resolves to the expected value when the job is run.

The queue can also requeue a job if it fails, up to a per-job number of retries.

It also includes a decorator (decorator syntax is not supported in plain JS yet, so no fancy `@` notation) for wrapping commonly used functions.
A sharded version of this queue that can support cluster workloads is published on npm as `sharded-interval-queue`.
## Installation
```bash
npm install async-interval-queue
```
## Usage
### Creating an instance
```js
const AsyncQueue = require('async-interval-queue');

// Create a new queue with the running interval in ms
let myQueue = new AsyncQueue(1000);

// Let's make an async function for tests
async function asyncFunc(value) {
  return value;
}
```
### Adding jobs using a thunk
```js
myQueue.add(() => asyncFunc("Hello")).then(console.log);
myQueue.add(() => asyncFunc("World")).then(console.log);

myQueue.start();
```
### Wrapping functions with the decorator
```js
let myQueuedFunc = myQueue.decorator(asyncFunc);

myQueuedFunc("Hello").then(console.log);
myQueuedFunc("World").then(console.log);

myQueue.start();
```
The queue starts automatically when a job is added. To prevent this, set the second parameter of `add` to `true`.

The queue stops when there are no jobs left. You can restart it manually with `start`, or by adding a job without the `doNotStart` parameter (see the sketch below).
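A minimal sketch (reusing the `asyncFunc` test function from above) of queuing jobs without starting the queue and then starting it manually:

```js
// Queue jobs without starting the queue (doNotStart = true)
myQueue.add(() => asyncFunc("first"), true).then(console.log);
myQueue.add(() => asyncFunc("second"), true).then(console.log);

// Start the queue manually once everything is queued
myQueue.start();
```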
## Requeuing and optional parameters
`add` has two optional parameters, which can also be passed to the decorator:

- `doNotStart` - if `true`, the queue is not started when this job is added.
- `retries` - number of retries. If greater than zero, the job is requeued when it fails, and the promise resolves with the first successful run (or the last run if all retries are used).
For example, to enqueue a job without starting the queue and with 3 retries:
```js
// Thunk
myQueue.add(() => asyncFunc("city"), true, 3);

// Decorator
let myDecorator = myQueue.decorator(asyncFunc, true, 3);
```
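As an illustrative sketch (the flaky function below is made up purely for demonstration), retries let a job that fails intermittently still resolve:

```js
let attempts = 0;

// A deliberately flaky async function: fails twice, then succeeds
async function flakyFunc() {
  attempts += 1;
  if (attempts < 3) throw new Error("temporary failure");
  return "succeeded on attempt " + attempts;
}

// With 3 retries, the promise should resolve with the first successful run
myQueue.add(() => flakyFunc(), false, 3)
  .then(console.log)      // expected: "succeeded on attempt 3"
  .catch(console.error);  // reached only if every retry fails
```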
## Contributing
Pull requests are welcome. This queue is being used in production at Greywing, so I'd be happy to hear about how we can improve it.
## Future work and improvements
This version of the queue does not support multi-threaded or cluster workloads. The next step is to integrate it with a persistent cache so that multiple programs can keep their own queues but run synchronized.