The framework provides a flexible way to test and benchmark multiple candidates, enabling meaningful A/B testing and measurement of performance changes between implementations and versions.
It is also well suited to benchmarking asynchronous execution, because pFreak is based on Chromium DevTools' raw trace data: it sums the execution duration of each unit of work instead of relying on a start/end timestamp approach.
In short, it provides a flexible, scalable framework that separates the pre-execution setup, the execution itself, and the assertion function.
This was originally implemented as part of the CalDOM UI library's development.
How it works?
Behind the scenes, pFreak uses Puppeteer to automate the process and captures trace data through a Chrome DevTools Protocol (CDP) session (the same mechanism as the Developer Tools Performance tab). The captured raw trace data is processed by devtools-timeline-model, and the formatted results can then be viewed in the browser.
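As a rough sketch of the trace-capture step (this is illustrative, not pFreak's actual source; the URL and trace path are placeholders), Puppeteer's tracing API can be used like this:

```javascript
// Hedged sketch: capture a DevTools performance trace with Puppeteer.
// The page URL and trace path are placeholders, not pFreak's real values.
async function captureTrace(url, tracePath) {
  // Lazy require so this sketch can be loaded without Puppeteer installed.
  const puppeteer = require('puppeteer');
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url);
  await page.tracing.start({ path: tracePath }); // same data as the Performance tab
  await page.evaluate(() => { /* run the task here */ });
  await page.tracing.stop();
  await browser.close();
  return tracePath; // raw JSON trace, ready to be parsed by devtools-timeline-model
}
```

The resulting JSON file contains the raw trace events that devtools-timeline-model turns into per-event timing data.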
Sample benchmark result preview
Sample test result preview
Each task is iterated X times and the mean execution duration is taken.
Steps for each iteration:
- Open a new page ("empty_page.html")
- Slow down the CPU speed by ?x
- Load the respective library through a script tag
- Load the respective task through a script tag
- Start tracing (to capture the DevTools performance metrics)
- Run the task (Puppeteer's code-injection execution time is excluded by scheduling the task with setTimeout before tracing starts.)
- Wait X seconds for all task operations to complete. (This can be configured per task/test.)
- Stop tracing
- Assert that the task completed.
- Close the page
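The steps above might be sketched like this. This is an illustrative outline only, not pFreak's real implementation; the throttling rate, wait time, page path, and the `window.runTask` / `window.assertTask` hooks are all assumptions:

```javascript
// Illustrative sketch of one benchmark iteration (not pFreak's actual code).
async function runIteration(browser, libUrl, taskUrl, opts = { cpuRate: 4, waitMs: 3000 }) {
  const page = await browser.newPage();
  await page.goto('file://' + __dirname + '/empty_page.html'); // placeholder path
  await page.emulateCPUThrottling(opts.cpuRate);  // slow down the CPU by ?x
  await page.addScriptTag({ url: libUrl });       // load the candidate library
  await page.addScriptTag({ url: taskUrl });      // load the task definition
  // Schedule the task with setTimeout BEFORE tracing starts, so Puppeteer's
  // code-injection overhead is excluded from the trace.
  await page.evaluate(() => setTimeout(() => window.runTask(), 0)); // assumed hook
  await page.tracing.start({ path: 'trace.json' });
  await new Promise(r => setTimeout(r, opts.waitMs)); // wait for the task to finish
  await page.tracing.stop();
  const ok = await page.evaluate(() => window.assertTask()); // assumed assertion hook
  await page.close();
  return ok;
}
```

Scheduling the task before tracing starts is the key trick: the trace then contains only the task's own units of work, not the injection round-trip.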
The mean execution duration is reported in microseconds (1000 microseconds = 1 millisecond).
The coefficient of variation is shown below the execution duration.
- It indicates how much the iterations deviated from their mean execution duration; lower variation means the test is more stable.
- Variation can be high for small operations. If that's the case, repeat the task an equal number of times for all candidates to increase the execution time.
- Refer to _task_template.js for details.
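For reference, the mean and coefficient of variation can be computed from per-iteration durations like this (a generic stats helper with made-up example values, not pFreak's code):

```javascript
// Generic helpers: mean and coefficient of variation (CV) of iteration durations.
function mean(xs) {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}
function coefficientOfVariation(xs) {
  const m = mean(xs);
  const variance = xs.reduce((a, x) => a + (x - m) ** 2, 0) / xs.length;
  return Math.sqrt(variance) / m; // lower = more stable iterations
}

const durations = [400, 420, 380, 410, 390]; // microseconds, example values
console.log(mean(durations)); // 400
console.log((coefficientOfVariation(durations) * 100).toFixed(1) + '%'); // 3.5%
```

A CV of a few percent like this indicates a stable run; double-digit percentages suggest the task is too small to measure reliably.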
The factor of slowness is measured against the base candidate. For example:
- Vanilla JS execution duration = 400ms
- Candidate 1 execution duration = 600ms
- Candidate 1 is 1.5x slower than Vanilla JS
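That factor is simply the ratio of the candidate's mean duration to the base candidate's:

```javascript
// Factor of slowness: candidate duration relative to the base candidate.
function slownessFactor(candidateDuration, baseDuration) {
  return candidateDuration / baseDuration;
}

console.log(slownessFactor(600, 400)); // 1.5 → Candidate 1 is 1.5x slower
```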
How to use?
npm install pfreak
cd path/to/tests
1. Initiate pFreak. This creates and links all the necessary file structure.
Tip: Take a look at config.json and ./tasks/_task_template.js. Configure config.json if you want. _task_template.js is the base template for creating new tests/tasks; you can modify it to suit your default template.
2. Create a new task/test
pfreak new-task --candidate candidate_name --category category_name --task task_name
This creates a new JS file in the ./tasks/ folder. Define your test/benchmark in that file using the given structure.
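Since pFreak separates pre-execution setup, execution, and assertion, a task file might be shaped roughly like this. The actual structure is defined by ./tasks/_task_template.js, so treat the property names and export shape below as assumptions:

```javascript
// Hypothetical task shape: setup runs before tracing, run is the measured
// unit of work, and assert decides whether the iteration completed.
// The real structure comes from ./tasks/_task_template.js; names here are illustrative.
const task = {
  // Runs before tracing starts; not counted in the benchmark.
  setup() {
    this.items = Array.from({ length: 1000 }, (_, i) => i);
  },
  // The measured unit of work.
  run() {
    this.result = this.items.map(x => x * 2);
  },
  // Decides whether the iteration counts as completed.
  assert() {
    return this.result.length === 1000 && this.result[999] === 1998;
  },
};

task.setup();
task.run();
console.log(task.assert()); // true
```

Keeping setup out of `run` ensures only the work you intend to measure shows up in the trace.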
3. Run the benchmark or test-only mode
pfreak benchmark  # or: pfreak test
4. View Results
This starts an http-server at localhost:8080 and opens the results in your browser.
Refer to the help output for details.
What's next?
- Hope to expand this to Node-based benchmarks/tests as well (outside of the browser)
- Needs detailed documentation
- Support for multiple library versions of the same candidate (config.json)
How to contribute?
Your contributions are very welcome. I created this as a side project to benchmark and test the CalDOM UI library I built, and figured it could be useful for others as well. I don't have a grand plan for it yet, so please feel free to jump in :)