scrape-pages

A generalized scraper library using JSON-based instructions. It focuses on readability and reusability with a tiny api footprint.

:warning: This project is under active development. Expect bugs and frequent api changes.

See the GitHub issues page for an overview of ongoing work.

Installation

npm install scrape-pages

Usage

Let's download the ten most recent images from NASA's image of the day archive.

// First, define a `config`, `options`, and `params` to be passed into the scraper.
const config = {
  flow: [
    {
      name: 'index',
      download: 'https://apod.nasa.gov/apod/archivepix.html',
      parse: {
        selector: 'body > b > a',
        attribute: 'href',
        limit: 10
      },
    },
    {
      name: 'post',
      download: 'https://apod.nasa.gov/apod/{{ value }}',
      parse: {
        selector: 'img[src^="image"]',
        attribute: 'src'
      }
    },
    {
      name: 'image',
      download: 'https://apod.nasa.gov/apod/{{ value }}'
    }
  ]
}
 
const options = {
  logLevel: 'info',
  optionsEach: {
    image: {
      read: false,
      write: true
    }
  }
}
// params are separated from config & options so params can change while reusing configs & options.
const params = {
  folder: './downloads'
}
 
// Outside of defining configuration objects, the api is very simple. You have the ability to:
// - start the scraper
// - listen for events from the scraper
// - emit events back to the scraper (like 'stop')
// - query the scraped data
 
const { scrape } = require('scrape-pages')
// create an executable scraper and a querier
const { start, query } = scrape(config, options, params)
// begin scraping here
const { on, emit } = await start()
// listen to events
on('image:complete', id => console.log('COMPLETED image', id))
on('done', () => {
  const result = query({ scrapers: ['image'] })
  // result is [[{ filename: 'img1.jpg' }, { filename: 'img2.jpg' }, ...]]
})

For more real-world examples, visit the examples directory.

Playground

A playground exists at https://scrape-pages.js.org to help visualize scraper flows. It is also a useful way to share a config object with others.

Documentation

The compiled scraper created from a config object is meant to be reusable. You may choose to tweak the cache settings on the options object to run the scraper multiple times and only re-download certain parts. If given a different output folder in the params object, it will run completely fresh.
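
For example, a minimal sketch of reusing one compiled config against different output folders, assuming the config and options objects from the Usage section above:

// A minimal sketch of reuse, assuming the `config` and `options` objects defined in
// the Usage section above. Only `params` changes, so each folder gets a fresh run.
const { scrape } = require('scrape-pages')

const runInto = async folder => {
  const { start, query } = scrape(config, options, { folder })
  const { on } = await start()
  on('done', () => console.log('finished scraping into', folder))
  return query
}

runInto('./downloads/run-1')
runInto('./downloads/run-2')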

scrape

| argument | type | required | type file | description |
| --- | --- | --- | --- | --- |
| config | ConfigInit | Yes | src/settings/config/types.ts | what is being downloaded |
| options | OptionsInit | Yes | src/settings/options/types.ts | how something is downloaded |
| params | ParamsInit | Yes | src/settings/params/types.ts | who is being downloaded |

scraper

Calling scrape returns the start and query utilities. start() returns a promise which resolves to the on and emit utilities described below.

on

Listen for events from the scraper

| event | callback arguments | description |
| --- | --- | --- |
| 'done' | | when the scraper has completed |
| 'error' | Error | if the scraper encounters an error |
| '<scraper>:progress' | download id | emits progress of a download until it is completed |
| '<scraper>:queued' | download id | when a download is queued |
| '<scraper>:complete' | download id | when a download is completed |
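
For example, a minimal sketch of listening for these events, assuming the start function and the 'image' scraper name from the Usage example above:

// A sketch of wiring up listeners, assuming `start` from the Usage example above
// and the 'image' scraper defined in its config.
const listen = async () => {
  const { on } = await start()
  on('image:queued', id => console.log('queued download', id))
  on('image:progress', id => console.log('progress on download', id))
  on('image:complete', id => console.log('completed download', id))
  on('error', error => console.error('scraper error', error))
  on('done', () => console.log('scrape finished'))
}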

emit

While the scraper is working, you can affect its behavior by emitting these events:

| event | arguments | description |
| --- | --- | --- |
| 'useRateLimiter' | boolean | turn on or off the rate limit defined in the run options |
| 'stop' | | stop the crawler (note that in-progress requests will still complete) |
| 'stop:<scraper>' | | stop a specific scraper from accepting new downloads. This is useful when you want to control how many downloads a more complex run structure should make. |
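
For example, a sketch of throttling a run, assuming the start function and the 'post' scraper name from the Usage example above:

// A sketch of controlling a running scrape, assuming `start` from the Usage example
// above and the 'post' scraper defined in its config.
const control = async () => {
  const { on, emit } = await start()
  // toggle the rate limiter defined in the run options
  emit('useRateLimiter', false)
  // stop the 'post' scraper from accepting new downloads after ten are queued;
  // a plain 'stop' would halt the whole crawler instead
  let queued = 0
  on('post:queued', () => {
    queued += 1
    if (queued >= 10) emit('stop:post')
  })
}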

query

The query function lets you get scraped data out of a scrape in progress at any point after start() has been called. Note that query() is a convenience wrapper around query.prepare()(); use the latter for faster repeated queries, as the former re-builds its sqlite statements on every call (see the sketch after the table below). These are its arguments:

| name | type | required | description |
| --- | --- | --- | --- |
| scrapers | string[] | Yes | scrapers whose filenames and parsed values will be returned, in order |
| groupBy | string | Yes | name of a scraper whose results delineate the values in scrapers |
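
For example, a minimal sketch, assuming the query function from the Usage example above. The prepared form is shown passing the same arguments as query(), which is an assumption rather than a documented signature:

// A sketch of querying, assuming `query` from the Usage example above and the
// scraper names from its config.

// Convenience form: re-builds the underlying sqlite statement on every call.
const grouped = query({ scrapers: ['image'], groupBy: 'post' })
// grouped is an array of groups (one per 'post' result), each holding the 'image' rows.

// Prepared form for repeated queries. Passing the same arguments to prepare() is an
// assumption here, not a documented signature.
const prepared = query.prepare({ scrapers: ['image'], groupBy: 'post' })
const groupedAgain = prepared()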

Motivation

The pattern for downloading data from a website is largely the same from site to site. It can be summed up like so:

  • get a page from a url
    • scrape the page for more urls
      • get a page
        • get some text or media from page

What varies is how much nested url grabbing is required and in which steps data is saved. This project is an attempt to generalize that process into a single static config file.

Describing a site crawler with a single config enforces a structure and familiarity that is less common in other scraping libraries. Not only does this make the surface api much more condensed and immediately recognizable, it also opens the door to sharing and collaboration, since passing JSON objects around the web is safer than passing executable code. Hopefully, this means users can agree on common configs for different sites and, in time, begin to contribute common scraping patterns.

Generally, if a page can be scraped without executing JavaScript in a headless browser, this package should be able to scrape what you wish. However, if you are doing high-volume, production-level scraping, it is always better to write your own scraper code.
