# Zebrunner Mocha reporting agent

## Inclusion into your project

### Adding dependency

First, you need to add the Zebrunner Agent into your `package.json`.
=== "Yarn"
```shell
yarn add @zebrunner/javascript-agent-mocha
```
=== "NPM"
```shell
npm install @zebrunner/javascript-agent-mocha
```
### Reporter setup

The agent does not work automatically after being added to the project; it requires extra configuration in one of the supported Mocha configuration files. If no custom path to a configuration file is given, or there are multiple configuration files in the same directory, Mocha will search for and use only one. The priority order is:

- `.mocharc.js`
- `.mocharc.yaml`
- `.mocharc.yml`
- `.mocharc.jsonc`
- `.mocharc.json`

Navigate to any of the Mocha configuration files above and provide the following information:

- add `@zebrunner/javascript-agent-mocha` as a reporter;
- provide a valid configuration of your Zebrunner workspace in the `reporterConfig` property. You can find a description of all properties in the Reporter configuration section.
For example:

**.mocharc.js**

```js
module.exports = {
    reporter: '@zebrunner/javascript-agent-mocha',
    // Zebrunner reporter configuration
    reporterConfig: {
        // reporter configuration
    },
    spec: ['test/**/*.js'],
    parallel: true,
};
```

**.mocharc.json**

```json
{
    "reporter": "@zebrunner/javascript-agent-mocha",
    // Zebrunner reporter configuration
    "reporterConfig": {
        // reporter configuration
    },
    "spec": "test/**/*.js",
    "parallel": true
}
```
## Reporter configuration

Once the agent is added into your project, it is not automatically enabled. A valid configuration must be provided first.

It is currently possible to provide the configuration via:

- Environment variables
- Mocha configuration files (e.g. `.mocharc.json`)

The configuration lookup is performed in the order listed above, meaning that environment variables always take precedence over Mocha configuration files. As a result, it is possible to override configuration parameters by passing them through a configuration mechanism with higher precedence.

### Configuration options

The following subsections contain tables with configuration options. The first column in these tables contains the name of the option, represented both as an environment variable (the first value) and as a reporter config property from a Mocha configuration file such as `.mocharc.json` (the second value). The second column contains a description of the configuration option.
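The precedence rule above can be sketched in plain JavaScript. The helper below is hypothetical (it is not part of the agent's API) and only illustrates the lookup order: an environment variable, when present, wins over the value from the Mocha configuration file.

```javascript
// Hypothetical helper illustrating the lookup order: the environment
// variable takes precedence over the configuration-file value.
function resolveOption(envName, fileValue, env = process.env) {
  return env[envName] !== undefined ? env[envName] : fileValue;
}

// 'enabled' is false in reporterConfig, but REPORTING_ENABLED overrides it:
const enabled = resolveOption('REPORTING_ENABLED', 'false', { REPORTING_ENABLED: 'true' });
// → 'true'

// With no environment variable set, the file value is used:
const projectKey = resolveOption('REPORTING_PROJECT_KEY', 'DEF', {});
// → 'DEF'
```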
#### Common configuration

| Env var / Reporter config | Description |
|---|---|
| `REPORTING_ENABLED`<br/>`enabled` | Enables or disables reporting. The default value is `false`. |
| `REPORTING_PROJECT_KEY`<br/>`projectKey` | Optional value. It is the key of the Zebrunner project that the launch belongs to. The default value is `DEF`. |
| `REPORTING_SERVER_HOSTNAME`<br/>`server.hostname` | Mandatory if reporting is enabled. It is your Zebrunner hostname, e.g. `https://mycompany.zebrunner.com`. |
| `REPORTING_SERVER_ACCESS_TOKEN`<br/>`server.accessToken` | Mandatory if reporting is enabled. The access token is used to perform API calls. It can be obtained in Zebrunner on the 'Account and profile' page in the 'API Access' section. |
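Putting the common options together, a minimal working configuration contains only the mandatory values. The snippet below is a sketch; the hostname and access token are placeholders that you would replace with your own:

```json
{
    "reporter": "@zebrunner/javascript-agent-mocha",
    "reporterConfig": {
        "enabled": true,
        "projectKey": "DEF",
        "server": {
            "hostname": "https://mycompany.zebrunner.com",
            "accessToken": "somesecretaccesstoken"
        }
    }
}
```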
#### Automation launch configuration

The following configuration options allow you to configure accompanying information that will be displayed in Zebrunner for the automation launch.

| Env var / Reporter config | Description |
|---|---|
| `REPORTING_LAUNCH_DISPLAY_NAME`<br/>`launch.displayName` | Display name of the launch in Zebrunner. The default value is `Default Suite`. |
| `REPORTING_LAUNCH_BUILD`<br/>`launch.build` | Build number associated with the launch. It can reflect either the test build number or the build number of the application under test. |
| `REPORTING_LAUNCH_ENVIRONMENT`<br/>`launch.environment` | Represents the target environment in which the tests were run. For example, `stage` or `prod`. |
| `REPORTING_LAUNCH_LOCALE`<br/>`launch.locale` | Locale that will be displayed for the automation launch in Zebrunner. For example, `en_US`. |
| `REPORTING_LAUNCH_TREAT_SKIPS_AS_FAILURES`<br/>`launch.treatSkipsAsFailures` | If the value is set to `true`, skipped tests will be treated as failures when the result of the entire launch is calculated; otherwise, skipped tests will be considered passed. The default value is `true`. |
| `<N/A>`<br/>`launch.labels` | Object with labels to be attached to the current launch. The property name is the label key, the property value is the label value. The label value must be a string. |
| `<N/A>`<br/>`launch.artifactReferences` | Object with artifact references to be attached to the current launch. The property name is the artifact reference name, the property value is the artifact reference value. The value must be a string. |
#### Milestone

The Zebrunner Milestone for the automation launch can be configured using the following configuration options (all of them are optional).

| Env var / Reporter config | Description |
|---|---|
| `REPORTING_MILESTONE_ID`<br/>`milestone.id` | Id of the Zebrunner Milestone to link the automation launch to. The id is not displayed on the Zebrunner UI, so the field is basically used for internal purposes. If the milestone does not exist, the launch will continue executing. |
| `REPORTING_MILESTONE_NAME`<br/>`milestone.name` | Name of the Zebrunner Milestone to link the automation launch to. If the milestone does not exist, an appropriate warning message will be displayed in the logs, but the test suite will continue executing. |
#### Notifications

Zebrunner provides notification capabilities for automation launch results. The following options configure notification rules and targets.

| Env var / Reporter config | Description |
|---|---|
| `REPORTING_NOTIFICATION_NOTIFY_ON_EACH_FAILURE`<br/>`notifications.notifyOnEachFailure` | Specifies whether Zebrunner should send notifications to Slack/Teams on each test failure. The notifications will be sent even if the launch is still running. The default value is `false`. |
| `REPORTING_NOTIFICATION_SLACK_CHANNELS`<br/>`notifications.slackChannels` | A comma-separated list of Slack channels to send notifications to. Notifications will be sent only if the Slack integration is properly configured in Zebrunner with valid credentials for the project the launch is reported to. Zebrunner can send two types of notifications: on each test failure (if the appropriate property is enabled) and on launch finish. |
| `REPORTING_NOTIFICATION_MS_TEAMS_CHANNELS`<br/>`notifications.teamsChannels` | A comma-separated list of Microsoft Teams channels to send notifications to. Notifications will be sent only if the Teams integration is configured in the Zebrunner project with valid webhooks for the channels. Zebrunner can send two types of notifications: on each test failure (if the appropriate property is enabled) and on launch finish. |
| `REPORTING_NOTIFICATION_EMAILS`<br/>`notifications.emails` | A comma-separated list of emails to send notifications to. This type of notification does not require further configuration on the Zebrunner side. Unlike other notification mechanisms, Zebrunner can send emails only on launch finish. |
### Examples

#### Environment Variables

The following code snippet is a list of all configuration environment variables from a `.env` file:

```
REPORTING_ENABLED=true
REPORTING_PROJECT_KEY=DEF
REPORTING_SERVER_HOSTNAME=https://mycompany.zebrunner.com
REPORTING_SERVER_ACCESS_TOKEN=somesecretaccesstoken
REPORTING_LAUNCH_DISPLAY_NAME=Nightly Regression
REPORTING_LAUNCH_BUILD=2.41.2.2431-SNAPSHOT
REPORTING_LAUNCH_ENVIRONMENT=QA
REPORTING_LAUNCH_LOCALE=en_US
REPORTING_LAUNCH_TREAT_SKIPS_AS_FAILURES=true
REPORTING_MILESTONE_ID=1
REPORTING_MILESTONE_NAME=Release 1.0.0
REPORTING_NOTIFICATION_NOTIFY_ON_EACH_FAILURE=false
REPORTING_NOTIFICATION_SLACK_CHANNELS=dev, qa
REPORTING_NOTIFICATION_MS_TEAMS_CHANNELS=dev-channel, management
REPORTING_NOTIFICATION_EMAILS=manager@mycompany.com
```
#### Configuration file

Below you can see an example of the full configuration provided via a `.mocharc.json` file:

```json
{
    "reporter": "@zebrunner/javascript-agent-mocha",
    "reporterConfig": {
        "enabled": true,
        "projectKey": "DEF",
        "server": {
            "hostname": "https://mycompany.zebrunner.com",
            "accessToken": "somesecretaccesstoken"
        },
        "launch": {
            "displayName": "Nightly Regression",
            "build": "2.41.2.2431-SNAPSHOT",
            "environment": "LOCAL",
            "locale": "en_US",
            "treatSkipsAsFailures": true,
            "labels": {
                "runner": "Alice",
                "reviewer": "Bob"
            },
            "artifactReferences": {
                "landing": "https://zebrunner.com"
            }
        },
        "milestone": {
            "id": 1,
            "name": "Release 1.0.0"
        },
        "notifications": {
            "notifyOnEachFailure": false,
            "slackChannels": "dev, qa",
            "teamsChannels": "dev-channel, management",
            "emails": "manager@mycompany.com"
        }
    },
    "spec": "test/**/*.js",
    "parallel": true,
    "require": "test/rootHooks.js" // path to file with Root Hooks
}
```
## Collecting test logs

The Zebrunner Agent collects logs produced by the Pino logger. To capture logging messages and send them to Zebrunner, it is necessary to include additional configuration:

- Create a new `logger.js` file (or open your existing Pino configuration file) and update it with an additional transport that sends logs to Zebrunner:

```js
const pino = require('pino');

const transport = pino.transport({
    targets: [
        {
            target: 'pino/file', // logs to the standard output by default
        },
        {
            target: '@zebrunner/javascript-agent-mocha/pino-transport',
        },
        // other transports
    ],
});

module.exports = pino(transport);
```

- Make sure your tests are using the Pino logger with the configuration provided in the `logger.js` file from the step above.

```js
const logger = require('./logger');

describe('Sample suite', () => {
    it('sample test', function () {
        logger.info('some logging message from the test');
        // ...
    });
});
```
- To correctly separate logging messages by test, whether tests are executed serially or in parallel, it is necessary to define Root Hooks:
    - Create a `rootHooks.js` file (or update an existing one if you already have it) and add the following configuration for the `beforeEach` and `afterEach` root hooks. Please note that the `logger` object should use the same configuration provided in the file created in step #1 above. It is recommended to copy the whole code snippet below to avoid configuration mistakes.

```js
const logger = require('./logger');
const { CurrentTest } = require('@zebrunner/javascript-agent-mocha');

exports.mochaHooks = () => ({
    beforeEach: [
        function beforeEachRoot() {
            CurrentTest.initLogging(this.currentTest);
            logger.info('EVENT_TEST_BEGIN');
        },
    ],
    afterEach: [
        function afterEachRoot() {
            logger.info('EVENT_TEST_END');
        },
    ],
});
```

    - Include the `rootHooks.js` file with the Root Hooks into the Mocha configuration file (e.g. `.mocharc.json`):

**.mocharc.json**

```json
{
    "reporter": "@zebrunner/javascript-agent-mocha",
    // Zebrunner reporter configuration
    "reporterConfig": {
        // reporter configuration
    },
    "spec": "test/**/*.js",
    "parallel": true,
    "require": "test/rootHooks.js" // path to the created file with Root Hooks
}
```
Please note: the `pid` parameter is necessary in Pino logging messages for parallel execution of the tests. It should be present by default, since Pino includes `pid` in its default log bindings. If it is not present, it is necessary to adjust the Pino configuration file (in this case located in `logger.js`) in the following way:

```js
const pino = require('pino');

const transport = pino.transport({
    targets: [
        {
            target: 'pino/file', // logs to the standard output by default
        },
        {
            target: '@zebrunner/javascript-agent-mocha/pino-transport',
        },
        // other transports
    ],
});

module.exports = pino({
    // adding the 'pid' parameter to each Pino log message for parallel execution
    formatters: {
        bindings(bindings) {
            return { pid: bindings.pid };
        },
    },
}, transport);
```
## Tracking test maintainer

You may want to add transparency to the process of automation maintenance by having an engineer responsible for the evolution of specific tests or test suites. To serve that purpose, Zebrunner comes with the concept of a maintainer.

In order to keep track of those, the Agent comes with the `#setMaintainer()` method of the `CurrentTest` object. This method accepts the username of an existing Zebrunner user. If there is no user with the given username, `anonymous` will be assigned.

Please note: it is necessary to share Mocha's context to use this feature: `this.currentTest` for the `beforeEach()` hook or `this.test` for `it()` tests. Moreover, a classic `function()` should be defined instead of an arrow function, since lambdas lexically bind `this` and cannot access the Mocha context.
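The arrow-function caveat can be illustrated in plain JavaScript, without the agent or Mocha involved. Mocha injects the test context by binding `this` when calling your hook or test callback, and an arrow function ignores that binding:

```javascript
// Arrow functions capture `this` lexically at the point where they are
// defined; classic functions receive whatever `this` the caller binds.
const ctx = { currentTest: 'my test' }; // stands in for Mocha's context

function classicFn() {
  return this.currentTest; // `this` is whatever the caller binds
}

function makeArrow() {
  // the arrow closes over makeArrow's `this` permanently
  return () => this.currentTest;
}
const arrowFn = makeArrow.call({ currentTest: 'outer' });

classicFn.call(ctx); // → 'my test'  (rebinding works)
arrowFn.call(ctx);   // → 'outer'    (.call cannot rebind an arrow's `this`)
```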
```js
const { CurrentTest } = require('@zebrunner/javascript-agent-mocha');

describe('Sample suite', () => {
    beforeEach(function () {
        CurrentTest.setMaintainer(this.currentTest, 'developer'); // will be set for all tests from the file
    });

    it('first test', function () {
        CurrentTest.setMaintainer(this.test, 'tester'); // will be set only for this test
        // ...
    });

    it('second test', function () {
        // ...
    });
});
```

In this example, `developer` will be reported as the maintainer of `second test` (because the value is set in `beforeEach()`), while `tester` will be reported as the maintainer of `first test` (overriding the value set in `beforeEach()`).
## Attaching labels to test and launch

In some cases, it may be useful to attach meta information related to a test or the entire launch.

The agent comes with the concept of labels. A label is a simple key-value pair. The label key is a string, while the label value accepts a vararg of strings.

To attach a label to a test, you need to invoke the `#attachLabel()` method of the `CurrentTest` object within the scope of the test method. You can also use this method in `beforeEach()`, which means such labels will be assigned to all test cases from the file.

To attach a label to the entire launch, you can either invoke the `attachLabel` method of the `CurrentLaunch` object or provide the labels in the configuration file.

Please note: it is necessary to share Mocha's context to attach labels to a test: `this.currentTest` for the `beforeEach()` hook or `this.test` for `it()` tests. Moreover, a classic `function()` should be defined instead of an arrow function, since lambdas lexically bind `this` and cannot access the Mocha context.
```js
const { CurrentLaunch, CurrentTest } = require('@zebrunner/javascript-agent-mocha');

describe('Sample suite', () => {
    before(function () {
        CurrentLaunch.attachLabel('feature', 'api');
    });

    beforeEach(function () {
        CurrentTest.attachLabel(this.currentTest, 'test_label', 'before_1', 'before_2');
    });

    it('sample test', function () {
        CurrentLaunch.attachLabel('suite', 'regression', 'smoke');
        CurrentTest.attachLabel(this.test, 'owner', 'developer');
    });
});
```
## Attaching artifact references to test and launch

Labels are not the only option for attaching meta information to a test or launch. If the information you want to attach is a link (to a file or web page), it is more useful to attach it as an artifact reference (or, to put it simply, as a link).

The `#attachArtifactReference()` methods of the `CurrentTest` and `CurrentLaunch` objects serve exactly this purpose. These methods accept two arguments. The first one is the artifact reference name, which will be shown in Zebrunner. The second one is the artifact reference value.

You can also use `CurrentTest.attachArtifactReference()` in the `beforeEach()` hook, which means a reference will be assigned to all the test cases from the file.

Moreover, you can attach artifact references to the entire launch by specifying them in the configuration file.

Please note: it is necessary to share Mocha's context to attach artifact references to a test: `this.currentTest` for the `beforeEach()` hook or `this.test` for `it()` tests. Moreover, a classic `function()` should be defined instead of an arrow function, since lambdas lexically bind `this` and cannot access the Mocha context.
```js
const { CurrentLaunch, CurrentTest } = require('@zebrunner/javascript-agent-mocha');

describe('Sample suite', () => {
    before(function () {
        CurrentLaunch.attachArtifactReference('documentation', 'https://zebrunner.com/documentation/');
    });

    beforeEach(function () {
        CurrentTest.attachArtifactReference(this.currentTest, 'mocha', 'https://mochajs.org/');
    });

    it('sample test', function () {
        CurrentLaunch.attachArtifactReference('github', 'https://github.com/zebrunner');
        CurrentTest.attachArtifactReference(this.test, 'mocha_github', 'https://github.com/zebrunner/javascript-agent-mocha');
    });
});
```
## Attaching artifacts to test and launch

In case your tests or the entire launch produce some artifacts, it may be useful to track them in Zebrunner. The agent comes with a few convenient methods for uploading artifacts to Zebrunner and linking them to the currently running test or launch.

The `#uploadArtifactBuffer()` and `#uploadArtifactFromFile()` methods of the `CurrentTest` and `CurrentLaunch` objects serve exactly this purpose.

You can also use the methods mentioned above from `CurrentTest` in the `beforeEach()` hook, which means the artifact will be assigned to all test cases from the file.

Please note: it is necessary to share Mocha's context to attach artifacts to a test: `this.currentTest` for the `beforeEach()` hook or `this.test` for `it()` tests. Moreover, a classic `function()` should be defined instead of an arrow function, since lambdas lexically bind `this` and cannot access the Mocha context.
```js
const fs = require('fs');
const { CurrentLaunch, CurrentTest } = require('@zebrunner/javascript-agent-mocha');

describe('Sample suite', () => {
    before(function () {
        CurrentLaunch.uploadArtifactFromFile('index.js', './index.js');
    });

    beforeEach(function () {
        CurrentTest.uploadArtifactFromFile(this.currentTest, 'index.js', './index.js');
    });

    it('sample test', function () {
        const launchBuffer = fs.readFileSync('./.prettierrc.json');
        CurrentLaunch.uploadArtifactBuffer('simple_json', 'application/json', launchBuffer);

        const testBuffer = fs.readFileSync('./package.json');
        CurrentTest.uploadArtifactBuffer(this.test, 'package.json', 'application/json', testBuffer);
    });
});
```
## Reverting test registration

In some cases, it might be handy not to register test executions in Zebrunner. This may be caused by very special circumstances of a test environment or execution conditions.

The Zebrunner Agent comes with a convenient `#revertRegistration()` method of the `CurrentTest` object for reverting test registration at runtime. The following code snippet shows a case where a test is not reported on Mondays.

Please note: it is necessary to share Mocha's context to use this feature: `this.test` inside `it()` tests. Moreover, a classic `function()` should be defined instead of an arrow function, since lambdas lexically bind `this` and cannot access the Mocha context.
```js
const { CurrentTest } = require('@zebrunner/javascript-agent-mocha');

describe('Sample suite', () => {
    it('sample test', function () {
        if (new Date().getDay() === 1) {
            CurrentTest.revertRegistration(this.test);
        }
    });
});
```
It is worth mentioning that the method invocation does not affect the test execution, but simply unregisters the test in Zebrunner. To interrupt the test execution, you need to perform additional actions, for example, throw an Error.
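One way to combine both steps, unregistering and interrupting, is to revert first and then throw. The helper below is hypothetical (it is not part of the agent's API) and only sketches the pattern; inside an `it()` you would pass a bound call such as `() => CurrentTest.revertRegistration(this.test)` as the first argument.

```javascript
// Hypothetical helper: unregister the test in Zebrunner, then fail it
// locally so Mocha stops executing the test body. `revertFn` stands in
// for a bound call to CurrentTest.revertRegistration.
function revertAndAbort(revertFn, reason) {
  revertFn();
  throw new Error(reason); // interrupts the Mocha test
}
```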