Download website to a local directory (including all css, images, js, etc.)
Note: by default, dynamic websites (where content is loaded by js) may not be saved correctly because
website-scraper doesn't execute js, it only parses http responses for html and css files. If you need to download a dynamic website, take a look at website-scraper-phantom.
```
npm install website-scraper
```
```javascript
var scrape = require('website-scraper');
var options = {
  urls: ['http://nodejs.org/'],
  directory: '/path/to/save/'
};

// with promise
scrape(options).then((result) => {
  console.log(result);
}).catch((err) => {
  console.log(err);
});

// or with callback
scrape(options, (error, result) => {
  if (error) {
    console.log(error);
  } else {
    console.log(result);
  }
});
```
- urls - urls to download, required
- directory - path to save files, required
- sources - selects which resources should be downloaded
- recursive - follow hyperlinks in html files
- maxRecursiveDepth - maximum depth for hyperlinks
- maxDepth - maximum depth for all dependencies
- request - custom options for request
- subdirectories - subdirectories for file extensions
- defaultFilename - filename for index page
- prettifyUrls - prettify urls
- ignoreErrors - whether to ignore errors on resource downloading
- urlFilter - skip some urls
- filenameGenerator - generate filename for downloaded resource
- httpResponseHandler - customize http response handling
- resourceSaver - customize resources saving
- onResourceSaved - callback called when resource is saved
- onResourceError - callback called when resource's downloading has failed
- updateMissingSources - update url for missing sources with absolute url
- requestConcurrency - set maximum concurrent requests
You can find the default options in lib/config/defaults.js.
urls
Array of objects which contain urls to download and filenames for them. Required.
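As a sketch, a urls array can mix plain url strings (saved under the defaultFilename) with objects that set an explicit filename; the nodejs.org urls here are just placeholders:
```javascript
var scrape = require('website-scraper');

scrape({
  urls: [
    'http://nodejs.org/', // saved with defaultFilename
    { url: 'http://nodejs.org/about', filename: 'about.html' },
    { url: 'http://blog.nodejs.org/', filename: 'blog.html' }
  ],
  directory: '/path/to/save'
});
```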
directory
String, absolute path to directory where downloaded files will be saved. Directory should not exist. It will be created by scraper. Required.
sources
Array of objects to download, specifies selectors and attribute values to select files for downloading. By default scraper tries to download all possible resources.
```javascript
// Downloading images, css files and scripts
scrape({
  urls: ['http://nodejs.org/'],
  directory: '/path/to/save',
  sources: [
    { selector: 'img', attr: 'src' },
    { selector: 'link[rel="stylesheet"]', attr: 'href' },
    { selector: 'script', attr: 'src' }
  ]
});
```
recursive
Boolean, if true scraper will follow hyperlinks in html files. Don't forget to set maxRecursiveDepth to avoid infinite downloading. Defaults to false.
maxRecursiveDepth
Positive number, maximum allowed depth for hyperlinks. Other dependencies will be saved regardless of their depth. Defaults to null - no maximum recursive depth set.
maxDepth
Positive number, maximum allowed depth for all dependencies. Defaults to null - no maximum depth set.
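A minimal sketch of how the depth options combine (example.com and the depth values are placeholders):
```javascript
var scrape = require('website-scraper');

scrape({
  urls: ['http://example.com/'],
  directory: '/path/to/save',
  recursive: true,
  maxRecursiveDepth: 3, // follow hyperlinks at most 3 levels deep
  maxDepth: 5           // no dependency (css, images, scripts, pages) deeper than 5 levels
});
```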
request
Object, custom options for request. Allows to set cookies, userAgent, etc.
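For example, request headers can be set for all resources; a sketch, the User-Agent value is just an illustration:
```javascript
var scrape = require('website-scraper');

scrape({
  urls: ['http://example.com/'],
  directory: '/path/to/save',
  request: {
    headers: {
      'User-Agent': 'Mozilla/5.0 (Linux; Android 4.2.1; en-us; Nexus 4 Build/JOP40D) AppleWebKit/535.19 (KHTML, like Gecko) Chrome/18.0.1025.166 Mobile Safari/535.19'
    }
  }
});
```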
subdirectories
Array of objects, specifies subdirectories for file extensions. If null, all files will be saved to directory.
```javascript
/* Separate files into directories:
  - `img` for .jpg, .png, .svg (full path `/path/to/save/img`)
  - `js` for .js (full path `/path/to/save/js`)
  - `css` for .css (full path `/path/to/save/css`)
*/
scrape({
  urls: ['http://example.com'],
  directory: '/path/to/save',
  subdirectories: [
    { directory: 'img', extensions: ['.jpg', '.png', '.svg'] },
    { directory: 'js', extensions: ['.js'] },
    { directory: 'css', extensions: ['.css'] }
  ]
});
```
defaultFilename
String, filename for index page. Defaults to index.html.
prettifyUrls
Boolean, whether urls should be 'prettified', by having the defaultFilename removed. Defaults to false.
ignoreErrors
Boolean, if true scraper will continue downloading resources after an error occurs, if false - scraper will finish process and return error. Defaults to true.
urlFilter
Function which is called for each url to check whether it should be scraped. Defaults to null - no url filter will be applied.
```javascript
// Links to other websites are filtered out by the urlFilter
var scrape = require('website-scraper');
scrape({
  urls: ['http://example.com/'],
  urlFilter: function (url) {
    return url.indexOf('http://example.com') === 0;
  },
  directory: '/path/to/save'
});
```
filenameGenerator
String (name of the bundled filenameGenerator) or function. Filename generator determines path in file system where the resource will be saved.
byType (default)
When the byType filenameGenerator is used, the downloaded files are saved by extension (as defined by the subdirectories setting) or directly in the directory folder, if no subdirectory is specified for the specific extension.
bySiteStructure
When the bySiteStructure filenameGenerator is used, the downloaded files are saved in directory using the same structure as on the website (for example, the page /about of example.com is saved to DIRECTORY/example.com/about/index.html):
```javascript
var scrape = require('website-scraper');
scrape({
  urls: ['http://example.com/'],
  urlFilter: (url) => url.indexOf('http://example.com') === 0, // filter out links to other websites
  recursive: true,
  maxRecursiveDepth: 10,
  filenameGenerator: 'bySiteStructure',
  directory: '/path/to/save'
});
```
Custom function which generates filename. It takes 3 arguments: resource - Resource object, options - object passed to scrape function, occupiedFileNames - array of occupied filenames. Should return a string - a path for the specified resource, relative to directory.
```javascript
const scrape = require('website-scraper');
const crypto = require('crypto');

scrape({
  urls: ['http://example.com/'],
  directory: '/path/to/save',
  filenameGenerator: (resource, options, occupiedFileNames) => {
    // for example, name every file by a sha1 hash of its url
    // (a real generator would usually also keep the file extension)
    return crypto.createHash('sha1').update(resource.getUrl()).digest('hex');
  }
});
```
httpResponseHandler
Function which is called on each response, allows to customize resource or reject its downloading. It takes 1 argument - response object of request module - and should return a resolved Promise if the resource should be downloaded, or a Promise rejected with an Error if it should be skipped.
Promise should be resolved with:
- string which contains response body
- or object with properties body (response body, string) and metadata - everything you want to save for this resource (like headers, original text, timestamps, etc.), scraper will not use this field at all, it is only for result.
```javascript
// Rejecting resources with 404 status and adding metadata to other resources
scrape({
  urls: ['http://example.com/'],
  directory: '/path/to/save',
  httpResponseHandler: (response) => {
    if (response.statusCode === 404) {
      return Promise.reject(new Error('status is 404'));
    } else {
      // if you don't need metadata, you can resolve with response.body only
      return Promise.resolve({
        body: response.body,
        metadata: {
          headers: response.headers
        }
      });
    }
  }
});
```
Scrape function resolves with array of Resource objects which contain the metadata property from httpResponseHandler.
resourceSaver
Class which saves Resources, should have methods saveResource and errorCleanup which return Promises. Use it to save files where you need: to dropbox, amazon S3, existing directory, etc. By default all files are saved in local file system to new directory passed in directory option (see lib/resource-saver/index.js).
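A minimal sketch of a custom resourceSaver class, assuming the saveResource/errorCleanup methods described above (the method bodies are placeholders):
```javascript
var scrape = require('website-scraper');

scrape({
  urls: ['http://example.com/'],
  directory: '/path/to/save',
  resourceSaver: class MyResourceSaver {
    saveResource (resource) {
      // put the file wherever you need: S3, dropbox, an existing directory, ...
      return Promise.resolve();
    }
    errorCleanup (err) {
      // remove partially saved files, close connections, etc.
      return Promise.resolve();
    }
  }
});
```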
onResourceSaved
Function called each time when resource is saved to file system. Callback is called with Resource object. Defaults to null - no callback will be called.
onResourceError
Function called each time when resource's downloading/handling/saving to fs has failed. Callback is called with Resource object and Error object. Defaults to null - no callback will be called.
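For example, both callbacks can be used for simple progress logging (a sketch):
```javascript
var scrape = require('website-scraper');

scrape({
  urls: ['http://example.com/'],
  directory: '/path/to/save',
  onResourceSaved: (resource) => {
    console.log(`Resource ${resource} was saved`);
  },
  onResourceError: (resource, err) => {
    console.log(`Resource ${resource} was not saved because of ${err}`);
  }
});
```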
updateMissingSources
Boolean, if true scraper will set absolute urls for all failing sources, if false - it will leave them as is (which may cause incorrectly displayed page). Can also be an array of sources to update (structure is similar to sources).
```javascript
// update all failing img srcs with absolute url
scrape({
  urls: ['http://example.com/'],
  directory: '/path/to/save',
  updateMissingSources: true
});

// download nothing, just update all img srcs with absolute urls
scrape({
  urls: ['http://example.com/'],
  directory: '/path/to/save',
  sources: [],
  updateMissingSources: [{ selector: 'img', attr: 'src' }]
});
```
requestConcurrency
Number, maximum amount of concurrent requests. Defaults to Infinity.
callback
Callback function, optional, includes following parameters:
- error: if error - Error object, if success - null
- result: if error - null, if success - array of Resource objects containing:
  - url: url of loaded page
  - filename: filename where page was saved (relative to directory)
  - children: array of children Resources
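A sketch of reading that result when using promises (the url and directory are placeholders):
```javascript
const scrape = require('website-scraper');

scrape({ urls: ['http://nodejs.org/'], directory: '/path/to/save/' }).then((result) => {
  result.forEach((resource) => {
    // url and filename match the fields described above
    console.log(`${resource.url} was saved to ${resource.filename}`);
  });
});
```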
Log and debug
This module uses debug to log events. To enable logs you should use environment variable DEBUG. Next command will log everything from website-scraper:
```
export DEBUG=website-scraper*; node app.js
```
Module has different loggers for levels (for example website-scraper:log). Please read debug documentation to find how to include/exclude specific loggers.