Benchpress allows creation and sampling of macro benchmarks to compare performance of real-world web applications. The project is built and maintained by the Angular team and is used to test the performance of AngularJS and AngularDart, but it is not limited to testing with these frameworks.
$ npm install -g angular-benchpress
Expect frequent breaking changes.
Starting in a project's web app directory, run:

$ benchpress build

This generates the combined benchmark runner in `benchpress-build/` within the web app.
The benchpress library adds an array called `steps` to the global `bp` object, onto which a benchmark should push step configuration objects. Each object should contain a `name`, which is what the step shows up as in the report, and a `fn`, which is the function that gets evaluated and timed.
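As a minimal sketch of registering a step (the step name is illustrative, and the stub `bp` object is included only so the snippet runs standalone; in a real benchmark, benchpress supplies the global `bp`):

```javascript
// Stub of the global `bp` object that benchpress normally provides;
// defined here only to make the example self-contained.
var bp = { steps: [] };

// Register a benchmark step: `name` is the label shown in the report,
// `fn` is the function that gets evaluated and timed.
bp.steps.push({
  name: 'example step',
  fn: function () {
    // work to be measured goes here
  }
});

console.log(bp.steps[0].name); // → example step
```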
Benchpress also exposes an API to manage variables of a test run, useful for comparing test runs under different code conditions. This API is exposed on `bp.Variables`, and has the following methods and properties:
A variable should be an object with at least a value property, which is a string. Other properties may be added.
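For example (the `label` property here is hypothetical; only `value` is required by the description above):

```javascript
// A variable must have at least a string `value`; any other
// properties (like `label` here) are up to the benchmark.
var fastVariable = {
  value: 'optimized',      // required string
  label: 'Optimized build' // hypothetical extra property
};

console.log(fastVariable.value); // → optimized
```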
Here's how an AngularJS benchmark would incorporate Benchpress variables:
$scope.bpVariables = bp.Variables;
$scope.variableStates = bp.Variables.variables;
ctrl.benchmarkType = bp.Variables.selected ? bp.Variables.selected.value : undefined;
See the example in `benchmarks/largetable` for a full reference.
Variables are optional, and are a no-op as far as benchpress is concerned: benchpress relies on the benchmark code to read and manipulate variable state in order to change the actual execution of the steps under test. Benchpress provides this API because most tests implement variables of some sort, and without a shared notion of variables, benchpress would have a hard time running such tests programmatically.
The default variable to be executed can be provided in the query string of the URL using the `variable` parameter name, e.g. `?variable=<value>`.
There is one variable state set for all steps at any given time.
There are no sophisticated mechanisms for preparing or cleaning up after tests (yet). A benchmark should add a step before or after the real test in order to do test setup or cleanup. All steps will show up in reports.
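For instance, setup and cleanup can be expressed as ordinary steps around the timed step. A sketch (the step names are illustrative, and the stub `bp` exists only so the snippet runs standalone):

```javascript
// Stub `bp`; in a real benchmark, benchpress provides the global object.
var bp = { steps: [] };

var rows = [];

bp.steps.push({
  name: 'setup: seed rows',   // shows up in reports like any other step
  fn: function () { rows = [1, 2, 3]; }
});

bp.steps.push({
  name: 'render rows',        // the step actually under test
  fn: function () { return rows.map(function (r) { return r * 2; }); }
});

bp.steps.push({
  name: 'cleanup: clear rows',
  fn: function () { rows = []; }
});

// Simulate one pass through the steps.
bp.steps.forEach(function (step) { step.fn(); });
console.log(rows.length); // → 0 after cleanup
```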
Each benchmark directory should contain a file named `bp.conf.js`, which tells benchpress how to prepare the benchmark at build time.
Example benchpress config:
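A `bp.conf.js` generally exports a function that receives a config object; the following is a rough sketch only — the `scripts` entries and filenames are assumptions for illustration, not the definitive schema:

```javascript
// bp.conf.js — hypothetical sketch of a benchpress build config.
// The script paths below are assumptions, not real project files.
module.exports = function (config) {
  config.set({
    scripts: [
      { src: 'app.js' },   // the app code under test
      { src: 'bench.js' }  // the benchmark that pushes onto bp.steps
    ]
  });
};
```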
The CLI has three commands:
$ benchpress build --build-path=optional/path
$ benchpress run --build-path=optional/path    # Starts serving cwd at :3339; redirects '/' to build-path
$ benchpress launch_chrome                     # Launches Chrome Canary as described below
For Mac and Linux computers, a utility script is included to launch Chrome Canary with special flags that allow manual garbage collection and high-resolution memory reporting. Unless Chrome Canary is used, these features are not available and reports will lack this information. Samples will also contain more outliers on more expensive test runs, because garbage-collection timing is left up to the VM.
This launches Chrome Canary in Incognito Mode for more accurate testing.
$ benchpress launch_chrome
After opening the benchmark in the browser as described in Creating Benchmarks, the test execution may be configured in two ways:
The number of samples tells benchpress to analyze the most recent n samples for reporting. If the number of samples is 20 and a user runs a loop 99 times, only the last 20 samples are included in the report calculations. This value is controlled by a text input at the top of the screen, which defaults to 20.
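In other words, reporting looks at a sliding window over the recorded times. A sketch of the idea (not benchpress's actual implementation):

```javascript
// Keep only the most recent `n` samples for report calculations.
function recentSamples(allSamples, n) {
  return allSamples.slice(-n);
}

// 99 recorded times with a window of 20: only the last 20 are analyzed.
var times = [];
for (var i = 1; i <= 99; i++) { times.push(i); }

var window20 = recentSamples(times, 20);
console.log(window20.length); // → 20
console.log(window20[0]);     // → 80 (samples 80..99 remain)
```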
The number of times a test cycle executes is set by pressing the button representing how many cycles should be performed. Options are: