Reduce your build times from hours to minutes using parallelism and concurrency on TravisCI, CircleCI, CodeShip, locally, or anywhere else!
Whirlwind takes a set of tasks that you wish to distribute, such as slow end-to-end tests, and runs them across compute nodes (parallelism) as well as within each compute node (concurrency).
The task source can be either a predefined list or a directory.
You provide a set of tasks (files or strings) that you'd like to distribute. You configure the parallelism and concurrency parameters, and Whirlwind distributes the tasks across the nodes and within them, then runs them all. If all of the tasks pass, you get a clean exit code. If any fail, you see the error and deal with it. That's it!
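To make the distribution idea concrete, here is an illustrative sketch (not Whirlwind's actual internals): one simple way to split a task list across nodes is round-robin by node id, after which each node runs its own slice with the configured concurrency.

```javascript
// Illustrative sketch only: split a flat task list across totalNodes
// compute nodes using round-robin assignment by node id.
function tasksForNode(tasks, totalNodes, nodeId) {
  return tasks.filter((_, index) => index % totalNodes === nodeId);
}

const tasks = ['a.test.js', 'b.test.js', 'c.test.js', 'd.test.js', 'e.test.js'];
console.log(tasksForNode(tasks, 2, 0)); // node 0: ['a.test.js', 'c.test.js', 'e.test.js']
console.log(tasksForNode(tasks, 2, 1)); // node 1: ['b.test.js', 'd.test.js']
```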
Yep. You can add pre-processors and post-processors to do some setup and finalizing. For example, if you're running end-to-end tests, you'll probably want to start a server first, or you may want to instrument all your files before running your tests. And when the tests finish, you may want to pick up all the reports and post them somewhere.
You can also configure other tweaks, such as batch vs. single mode. For example, a tool like Cucumber works better when it is given a batch of files to run, whereas a tool that only takes a single parameter would work in single mode.
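As an illustrative sketch of the difference (not Whirlwind's actual internals; the command names are made up): batch mode flattens all tasks into one invocation's parameters, while single mode produces one invocation per task.

```javascript
// Illustrative sketch only: how batch vs. single mode could turn a task
// list into command invocations. "cucumber" and "screenshot-tool" are
// example command names, not real configuration values.
const tasks = ['a.feature', 'b.feature', 'c.feature'];

// Batch mode: the whole task list is flattened into one command's parameters.
const batchCommands = ['cucumber ' + tasks.join(' ')];

// Single mode: the command is invoked once per task.
const singleCommands = tasks.map(task => 'screenshot-tool ' + task);

console.log(batchCommands);  // one invocation with all tasks
console.log(singleCommands); // three invocations, one task each
```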
npm install -g whirlwind
The easiest way to use Whirlwind is to supply a whirlwind.json file in your project, and then simply run:
Enjoy your build times dropping from hours to minutes!
Let's go through the configuration file step-by-step:
First you define the total number of nodes and the current node id. CI servers typically set these as environment variables, so you just have to let Whirlwind know what they are, like this:
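As a sketch, such a configuration might look like the following. CIRCLE_NODE_TOTAL and CIRCLE_NODE_INDEX are real CircleCI environment variables; the totalNodes and nodeId field names are assumptions for illustration:

```json
{
  "totalNodes": "$CIRCLE_NODE_TOTAL",
  "nodeId": "$CIRCLE_NODE_INDEX"
}
```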
This example is for CircleCI. You can also use literal numbers here if you like.
Next you define a process like this:
name: A unique name for this process.

parallelism: States how many CI nodes / containers this process should be distributed over.

processor.concurrency: States how many processes to run on a node / container.

processor.module: The runner to use (currently the exec-runner).

processor.source: The tasks that will be distributed across and within nodes / containers. You can either use a directory with a glob pattern, or a list with an ['array', 'of', 'strings'].

processor.moduleOptions: Options used by the exec-runner. Currently the exec-runner is the only available runner, but we will soon add more and allow you to drop in your own modules. Notice the use of $TASKS: this is where the tasks from the source are injected. If you don't specify a separator, the tasks are passed to the script as a space-separated flat array of files / strings.

processor.mode: Either "batch" (the default) or "single". In single mode, the process receives the tasks one by one. In batch mode, the tasks are flattened into a set of parameters to pass to the executable module.
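Putting these fields together, a single process definition might look like the following sketch. The overall JSON shape (including the processes key, the source sub-fields, and the mocha command) is an assumption for illustration; only the field names described above come from this document:

```json
{
  "processes": [
    {
      "name": "e2e-tests",
      "parallelism": 2,
      "processor": {
        "concurrency": 2,
        "module": "exec-runner",
        "source": {
          "directory": "test/e2e/**/*.js"
        },
        "moduleOptions": {
          "command": "mocha $TASKS"
        },
        "mode": "batch"
      }
    }
  ]
}
```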
You may specify multiple processors in the configuration file. For example, you might have one processor with a parallelism of 5 and a concurrency of 1 for your end-to-end tests, and another processor for your integration tests with a parallelism of 2 and a concurrency of 3. This means you'd be utilising 7 nodes / containers.
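That example could be expressed with two entries along these lines (again just a sketch; the processes key is an assumption, and the other processor fields are trimmed for brevity):

```json
{
  "processes": [
    { "name": "e2e-tests",         "parallelism": 5, "processor": { "concurrency": 1 } },
    { "name": "integration-tests", "parallelism": 2, "processor": { "concurrency": 3 } }
  ]
}
```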
If you are going to run end-to-end tests on your app, you'll likely want to start a server first. For this you can use pre-processors like this:
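As a rough sketch of the idea (the preProcessors key and its fields are assumptions for illustration, not confirmed configuration names):

```json
{
  "preProcessors": [
    {
      "name": "start-server",
      "command": "npm start"
    }
  ]
}
```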