# Distributed job queue using Redis
To stop, call `shutdown` and wait for the callback. This shuts down the queue gracefully, allowing any in-progress jobs to complete. This works well in tandem with naught.
## Usage

```js
var JobQueue = require('redis-dist-job-queue');
var jobQueue = new JobQueue();

jobQueue.registerTask("./foo_node_module");

jobQueue.start();

var options = {resourceId: 'resource_id', params: {theString: "derp"}};
jobQueue.submitJob('thisIsMyTaskId', options, function(err) {
  if (err) throw err;
  console.info("job submitted");
});

jobQueue.on('error', function(err) {
  console.error("job queue error:", err.stack);
});
```
Contents of `foo_node_module.js`:

```js
module.exports = {
  id: 'thisIsMyTaskId',
  perform: function(params, callback) {
    console.info(params.theString.toUpperCase());
    callback();
  },
};
```
### new JobQueue([options])

`options`:

 * `namespace` - redis key namespace. Defaults to
 * `queueId` - the ID of the queue on redis that we are connecting to. Defaults to
 * `redisConfig` - defaults to an object with properties:
 * `childProcessCount` - If set to `0` (the default), no child processes are created; instead all jobs are performed in the current process. If set to a positive number, that many worker child processes are created.
 * `workerCount` - number of workers for this JobQueue instance, per child process. So if `childProcessCount` is positive, total concurrency for this server is `childProcessCount * workerCount`; if it is `0`, total concurrency for this server is `workerCount`. Defaults to the number of CPU cores on the computer.
 * `flushStaleTimeout` - how often, in milliseconds, to scan for jobs that crashed while executing and move them from the processing queue to the failed job queue. Defaults to `30000`.
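Taken together, the constructor options might be combined like this. Every value below is illustrative, not a library default:

```js
// Illustrative JobQueue constructor options; the values are examples,
// not the library's defaults.
var options = {
  namespace: "myApp.",          // redis key namespace
  queueId: "imageJobs",         // which queue on redis to connect to
  redisConfig: {},              // passed through to the redis client
  childProcessCount: 2,         // spawn 2 worker child processes
  workerCount: 4,               // 4 workers per child process
  flushStaleTimeout: 30000,     // scan for crashed jobs every 30 seconds
};
// Since childProcessCount is positive, total concurrency for this
// server is childProcessCount * workerCount = 8.
```

You would pass this object to the `JobQueue` constructor.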
### jobQueue.start()

Starts the desired number of workers listening on the queue.

It's quite possible that you never want to call `start()`, for example when you want a server to submit processing jobs but not perform them.
### jobQueue.registerTask(modulePath)

You must register all the tasks you want to be able to perform before calling `start()`.

`modulePath` is resolved just like a call to `require`.

The node module at `modulePath` must export these options:
 * `id` - unique task ID that identifies what function to call to perform the task.
 * `perform(params, callback)` - function which actually does the task.
   * `params` - the same params you gave to `submitJob`, JSON-encoded and then decoded.
   * `callback(err)` - call this when you're done processing. Until you call the callback, the resource is locked and no other processing job will touch it.
And it may export these optional options:
 * `timeout` - milliseconds since the last heartbeat to wait before considering a job failed. Defaults to
### jobQueue.submitJob(taskId, options, callback)

Adds a job to the queue to be processed by the next available worker.

 * `taskId` - the `id` field of a task you registered with `registerTask`.
 * `options` - object containing any of these properties:
   * `resourceId` - a unique identifier for the resource you are about to process. Only one job will run at a time per resource.
   * `params` - an object which will get serialized to and from JSON and then passed to the `perform` function of a task.
   * `retries` - how many times to retry a job before moving it to the failed queue. Defaults to `0`.
 * `callback(err)` - lets you know when the job made it onto the queue.
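For example, a submission using all three option properties. The task id, resource id, and params here are hypothetical:

```js
// Hypothetical submitJob options: only one job at a time will run for
// resource "user/1234", and the job is retried up to 3 times before
// being moved to the failed queue.
var options = {
  resourceId: 'user/1234',
  params: {email: "user@example.com"},  // JSON round-tripped, then handed to perform
  retries: 3,
};
// jobQueue.submitJob('sendWelcomeEmail', options, function(err) {
//   if (err) throw err;
// });
```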
### jobQueue.shutdown(callback)

Gracefully begins the shutdown process, allowing any ongoing tasks to finish. `callback` is called once everything is completely shut down.
### jobQueue.retryFailedJobs(callback)

Moves all jobs from the failed queue to the pending queue.
### jobQueue.deleteFailedJobs(callback)

Deletes all jobs from the failed queue.
### jobQueue.forceFlushStaleJobs(callback)

Forces an immediate flush of jobs that crashed while executing. Jobs of this sort are put into the failed queue, and you can then retry or delete them.

You probably don't want to call `forceFlushStaleJobs` manually; it's mostly for tests.
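As a sketch of how the failed-queue operations fit together, an admin script might move everything that crashed back onto the pending queue. The method name `retryFailedJobs` and its node-style `callback(err)` signature are assumptions here, and `jobQueue` is an already constructed instance:

```js
// Sketch: retry everything on the failed queue.
// Assumes the retry method takes a node-style callback(err).
function retryAllFailed(jobQueue, done) {
  jobQueue.retryFailedJobs(function(err) {
    if (err) return done(err);
    console.info("failed jobs moved back to the pending queue");
    done(null);
  });
}
```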