# Lighthouse

> Stops you crashing into the rocks; lights the way



status: early. sorta working

```sh
npm install
npm link

# Start Chrome with a few flags
npm run chrome

# Kick off a lighthouse run; see flags and options
lighthouse --help
```

The same audits can also be run from a Chrome extension. See `./extension`.

Some basic unit tests are in `/test` and run via mocha. eslint also checks for style violations.

```sh
# lint and test all files
npm test

# run linting and unit tests separately
npm run lint
npm run unit
```

It's a moving target, but here's a recent attempt at capturing...

* Driver - Interfaces with the Chrome Debugging Protocol
* Gatherers - Request data from the browser (and may post-process it)
* Artifacts - The output of gatherers
* Audits - Non-performance evaluations of capabilities and issues. Each includes a raw value and a score of that value.
* Metrics - Performance metrics summarizing the UX
* Diagnoses - The perf problems that affect those metrics
* Aggregators - Pull audit results, group them into user-facing components (e.g. `install_to_homescreen`), and apply weighting and overall scoring.
* Interacting with Chrome: The Chrome protocol connection is maintained via chrome-remote-interface for the CLI and the `chrome.debugger` API when running in the Chrome extension.
* Event binding & domains: Some domains must be `enable()`d so that they issue events. Once enabled, they flush any events that represent current state. As such, network events will only issue after the domain is enabled. All the protocol agents resolve their `Domain.enable()` callback only after they have flushed any pending events. See example:
```js
// will NOT work
driver.sendCommand('Security.enable').then(_ => {
  driver.on('Security.securityStateChanged', state => { /* ... */ });
});

// WILL work! happy happy. :)
driver.on('Security.securityStateChanged', state => { /* ... */ }); // event binding is synchronous
driver.sendCommand('Security.enable');
```
* Reading the DOM: We prefer reading the DOM right from the browser (see #77). The driver exposes a `querySelector` method that can be used along with a `getAttribute` method to read values.
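To illustrate that pattern, here is a hypothetical sketch; the stub driver below stands in for the real one, and the promise-returning shapes of `querySelector`/`getAttribute` are assumptions made for this example:

```javascript
// Hypothetical sketch only: `stubDriver` stands in for the real driver;
// the promise-based signatures here are assumptions for illustration.
const stubDriver = {
  querySelector(selector) {
    // A real driver would resolve this over the Chrome Debugging Protocol.
    return Promise.resolve({
      getAttribute(name) {
        return Promise.resolve(name === 'content' ? 'width=device-width' : null);
      }
    });
  }
};

// Read an attribute of a DOM node via the driver.
function readAttribute(driver, selector, attrName) {
  return driver.querySelector(selector)
    .then(node => node.getAttribute(attrName));
}

readAttribute(stubDriver, 'meta[name="viewport"]', 'content')
  .then(content => console.log(content)); // logs 'width=device-width'
```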

The return value of each audit takes this shape:

```js
{
  name: 'audit-name',
  tags: ['what have you'],
  description: 'whatnot',
  // value: The score. Typically a boolean, but can be a number 0-100
  value: 0,
  // rawValue: Could be anything, as long as it can easily be stringified and displayed,
  //   e.g. 'your score is bad because you wrote ${rawValue}'
  rawValue: {},
  // debugString: Some *specific* error string for helping the user figure out why they failed here.
  //   The reporter can handle *general* feedback on how to fix, e.g. links to the docs
  debugString: 'Your manifest 404ed',
  // fault: Optional argument when the audit doesn't cover whatever it is you're doing,
  //   e.g. we can't parse your particular corner case out of a trace yet.
  //   Whatever is in `rawValue` and `value` would be N/A in these cases
  fault: 'some reason the audit has failed you, Anakin'
}
```
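As a concrete (hypothetical) illustration, an audit producing that shape might look like this; the `manifest-exists` name and the `artifacts.manifest` input are invented for this sketch, and only the returned structure follows the documentation above:

```javascript
// Hypothetical audit sketch. The audit name and the `artifacts.manifest`
// input are invented for illustration; only the returned shape follows
// the structure documented above.
function manifestExistsAudit(artifacts) {
  const found = Boolean(artifacts.manifest);
  return {
    name: 'manifest-exists',
    tags: ['manifest'],
    description: 'Manifest is available',
    value: found,                           // the score: a boolean here
    rawValue: artifacts.manifest || null,   // stringifiable for display
    debugString: found ? undefined : 'Your manifest 404ed'
  };
}

const result = manifestExistsAudit({manifest: {name: 'My App'}});
console.log(result.value); // true
```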

The `.eslintrc` defines all.

We're using JSDoc along with Closure annotations. Annotations are encouraged for all contributions.
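For example, a contribution might annotate a function like this (the function itself is just an illustration, not code from the repo):

```javascript
/**
 * Computes a score for a raw measurement, clamped to the 0-100 range.
 * (Illustrative only; these names are not from the codebase.)
 * @param {number} rawValue The measured value.
 * @param {number} max The value that maps to a score of 100.
 * @return {number} A score between 0 and 100.
 */
function computeScore(rawValue, max) {
  const score = Math.round((rawValue / max) * 100);
  return Math.min(100, Math.max(0, score));
}

console.log(computeScore(50, 200)); // 25
```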

const > let > var. Use const wherever possible. Save var for emergencies only.

The traceviewer-based trace processor from node-big-rig was forked into Lighthouse. The DevTools Timeline Model is also available. There may be advantages to using one model over the other.