```bash
npm install huntsman --save
```
```javascript
/** Crawl wikipedia and use jquery syntax to extract information from the page **/

var huntsman = require('huntsman');
var spider = huntsman.spider();

spider.extensions = [
  huntsman.extension( 'recurse' ), // load recurse extension & follow anchor links
  huntsman.extension( 'cheerio' )  // load cheerio extension
];

// follow pages which match this uri regex
spider.on( /en\.wikipedia\.org\/wiki\/\w+$/, function ( err, res ){

  // use jquery-style selectors & functions
  var $ = res.extension.cheerio;
  if( !$ ) return; // content is not html

  // extract information from the page body
  var wikipedia = {
    uri: res.uri,
    heading: $('h1.firstHeading').text().trim(),
    body: $('div#mw-content-text p').first().text().trim()
  };

  console.log( wikipedia );

});

spider.queue.add( 'http://en.wikipedia.org/wiki/Huntsman_spider' );
spider.start();
```
```
peter@edgy:/tmp$ node examples/html.js
... etc
```
More examples are available in the /examples directory
How it works
Huntsman takes one or more 'seed' urls with the `spider.queue.add()` method.

Once the process is kicked off with `spider.start()`, it will take care of extracting links from each page and following only the pages we want.
To define which pages are crawled, use the `spider.on()` function with a string or regular expression. Each page will only be crawled once. If multiple regular expressions match a uri, the callbacks for all of them will be called.

Page urls which do not match an `on` condition will never be crawled.
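For example, here is a quick sketch with one string condition and one regular expression condition (the blog patterns are hypothetical):

```javascript
// both of these handlers run for a page like http://example.com/blog/hello-world

// match by string
spider.on( '/blog/', function( err, res ){
  console.log( 'string match:', res.uri );
});

// a regular expression gives finer control
spider.on( /\/blog\/[\w-]+$/, function( err, res ){
  console.log( 'regex match:', res.uri );
});
```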
The spider has default settings; you can override them by passing a settings object when you create a spider.
```javascript
// use default settings
var huntsman = require('huntsman');
var spider = huntsman.spider();
```

```javascript
// override default settings
var huntsman = require('huntsman');
var spider = huntsman.spider({
  throttle: 2,  // maximum requests per second
  timeout: 5000 // maximum time (in ms) to wait for a response
});
```
Crawling a site
How you configure your spider will vary from site to site; generally you will only be looking for pages with a specific url format.
Scrape product information from amazon
In this example we can see that amazon product uris all seem to share the format `/gp/product/{id}`, so we can target just those pages.

After queueing the seed uri `http://www.amazon.co.uk/`, huntsman will follow all the product pages it finds recursively.
```javascript
/** Example of scraping products from the amazon website **/

var huntsman = require('huntsman');
var spider = huntsman.spider();

spider.extensions = [
  huntsman.extension( 'recurse' ), // load recurse extension & follow anchor links
  huntsman.extension( 'cheerio' )  // load cheerio extension
];

// target only product uris
spider.on( '/gp/product/', function ( err, res ){

  var $ = res.extension.cheerio;
  if( !$ ) return; // content is not html

  // note: the selectors below are illustrative; amazon's markup changes often
  var product = {
    uri: res.uri,
    heading: $('h1').first().text().trim(),
    description: $('#productDescription').first().text().trim()
  };

  console.log( product );

});

spider.queue.add( 'http://www.amazon.co.uk/' );
spider.start();
```
Find pets for sale on craigslist in london
More complex crawls may require you to specify hub pages to follow before you can get to the content you really want. You can add an `on` event without a callback and huntsman will still follow and extract links from those pages.
```javascript
/** Example of scraping information about pets for sale on craigslist in london **/

var huntsman = require('huntsman');
var spider = huntsman.spider();

spider.extensions = [
  huntsman.extension( 'recurse' ), // load recurse extension & follow anchor links
  huntsman.extension( 'cheerio' ), // load cheerio extension
  huntsman.extension( 'stats' )    // load stats extension
];

// target only pet uris (the uri patterns and selectors below are illustrative)
spider.on( /\/pet\/\d+\.html$/, function ( err, res ){

  var $ = res.extension.cheerio;
  if( !$ ) return; // content is not html

  var pet = {
    uri: res.uri,
    title: $('h2.postingtitle').text().trim(),
    body: $('#postingbody').text().trim()
  };

  console.log( pet );

});

// hub pages (no callback; huntsman will still follow links on these pages)
spider.on( '/pet/' );
spider.on( 'london.craigslist' );

spider.queue.add( 'http://london.craigslist.co.uk/' );
spider.start();
```
Extensions
Extensions have default settings; you can override them by passing an optional second argument when the extension is loaded.
```javascript
// loading an extension with an optional settings object
spider.extensions = [
  huntsman.extension( 'recurse', { /* settings */ } )
];
```
recurse
This extension extracts links from html pages and then adds them to the queue.

The default patterns only target anchor tags which use the http protocol; you can change any of the default patterns by declaring them when the extension is loaded.
```javascript
// default patterns (shown approximately; see the recurse source for the exact regexes)
huntsman.extension( 'recurse', {
  pattern: {
    search: /a([^>]+)href\s?=\s?['"]([^"'#]+)["']/gi, // find anchor tags
    refine: /['"]([^"'#]+)["']/, // extract the uri from each match
    filter: /^https?:\/\//       // only keep http(s) links
  }
});
```
`search` must be a global regexp and is used to target the links we want to extract.

`refine` is a regexp used to extract the bits we want from each `search` match.

`filter` is a regexp that must match or the link is discarded.
```javascript
// extract both anchor tags and script tags
huntsman.extension( 'recurse', {
  pattern: {
    search: /(a|script)([^>]+)(href|src)\s?=\s?['"]([^"'#]+)["']/gi
  }
});
```

```javascript
// ignore query segment of uris (exclude everything from '?' onwards)
huntsman.extension( 'recurse', {
  pattern: {
    refine: /['"]([^"'#?]+)/
  }
});
```

```javascript
// avoid some file extensions
huntsman.extension( 'recurse', {
  pattern: {
    filter: /^(?!.*\.(?:css|js|png|jpe?g|gif)$)https?:\/\//i
  }
});
```

```javascript
// avoid all uris with three letter file extensions
huntsman.extension( 'recurse', {
  pattern: {
    filter: /^(?!.*\.[a-z]{3}$)https?:\/\//i
  }
});
```

```javascript
// stay on one domain ('example.com' is a placeholder)
huntsman.extension( 'recurse', {
  pattern: {
    filter: /^https?:\/\/(www\.)?example\.com/i
  }
});
```
`recurse` converts relative urls to absolute urls and strips fragment identifiers and trailing slashes. If you need even more control, you can override the `normaliser` functions to modify these behaviours.
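The exact shape of those settings isn't shown here, so purely as a hypothetical sketch (the `normaliser` key and function signature below are assumptions; check the recurse source for the real interface):

```javascript
// hypothetical sketch: option name and signature are assumed, not confirmed
huntsman.extension( 'recurse', {
  normaliser: {
    // e.g. return the uri unchanged to keep trailing slashes
    stripTrailingSlash: function( uri ){ return uri; }
  }
});
```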
cheerio
This extension parses html and provides jquery-style selectors & functions.
```javascript
// default settings
huntsman.extension( 'cheerio' );
```
The `res.extension.cheerio` function is available in your `on` callbacks when the response body is html.
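For example (the uri pattern and selector here are only illustrative):

```javascript
spider.on( /example/, function( err, res ){
  var $ = res.extension.cheerio;
  if( !$ ) return; // content is not html
  console.log( $('title').text().trim() );
});
```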
cheerio reference: https://github.com/MatthewMueller/cheerio
json
This extension parses the response body with `JSON.parse()`.
```javascript
// enable json
huntsman.extension( 'json' );
```
The `res.extension.json` function is available in your `on` callbacks when the response body is json.
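A minimal sketch, assuming `res.extension.json` holds the parsed body (the uri pattern is illustrative):

```javascript
spider.on( /\.json$/, function( err, res ){
  var json = res.extension.json;
  if( !json ) return; // content is not json
  console.log( json ); // assumption: the parsed response body
});
```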
links
This extension extracts links from html pages and returns the result. It exposes the same functionality that the `recurse` extension uses to extract links.
```javascript
// enable extension
huntsman.extension( 'links' );
```
The `res.extension.links` function is available in your `on` callbacks when the response body is a string.
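A hypothetical sketch (whether `links` takes arguments and exactly what it returns are assumptions; check the links extension source):

```javascript
spider.on( /example/, function( err, res ){
  var links = res.extension.links;
  if( !links ) return; // response body is not a string
  console.log( links() ); // assumption: returns the uris extracted from the body
});
```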
stats
This extension displays statistics about pages crawled, error counts etc.
```javascript
// default settings
huntsman.extension( 'stats' );
```
Custom queues and response storage adapters
I'm currently working on being able to persist the job queue via something like redis and potentially caching http responses in mongo with a TTL.
If you live life on the wild side, these adapters can be configured when you create a spider.
Pull requests welcome.
(The MIT License)
Copyright (c) 2013 Peter Johnson <@insertcoffee>
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the 'Software'), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.