bda-research / node-webcrawler

Crawler is a web spider written in Node.js. It gives you the full power of jQuery on the server to parse a large number of pages as they are downloaded, asynchronously.
MIT License

This repo has been merged into node-crawler; please use node-crawler instead.

Features:

Here is the CHANGELOG

Help & Forks welcomed!

How to install

$ npm install node-webcrawler

Crash course

var Crawler = require("node-webcrawler");
var url = require('url');

var c = new Crawler({
    maxConnections : 10,
    // This will be called for each crawled page
    callback : function (error, result, $) {
        // $ is Cheerio by default
        // a lean implementation of core jQuery designed specifically for the server
        if(error){
            console.log(error);
        }else{
            console.log($("title").text());
        }
    }
});

// Queue just one URL, with default callback
c.queue('http://www.amazon.com');

// Queue a list of URLs
c.queue(['http://www.google.com/','http://www.yahoo.com']);

// Queue URLs with custom callbacks & parameters
c.queue([{
    uri: 'http://parishackers.org/',
    jQuery: false,

    // The global callback won't be called
    callback: function (error, result) {
        if(error){
            console.log(error);
        }else{
            console.log('Grabbed', result.body.length, 'bytes');
        }
    }
}]);

// Queue some HTML code directly without grabbing (mostly for tests)
c.queue([{
    html: '<p>This is a <strong>test</strong></p>'
}]);

Work with bottleneck

Control the rate limit for each connection; this is typically used together with proxies.

var Crawler = require("node-webcrawler");

var c = new Crawler({
    maxConnections : 3,
    rateLimits : 2000, // enforce a 2000 ms interval between requests per limiter
    callback : function (error, result, $) {
        if(error){
            console.error(error);
        }else{
            console.log($('title').text());
        }
    }
});

c.queue({
    uri:"http://www.google.com",
    limiter:"key1", // for connection of 'key1'
    proxy:"http://user:pass@127.0.0.1:8080"
});

c.queue({
    uri:"http://www.google.com",
    limiter:"key2", // for connection of 'key2'
    proxy:"http://user:pass@127.0.0.1:8082"
});

c.queue({
    uri:"http://www.google.com",
    limiter:"key3", // for connection of 'key3'
    proxy:"http://user:pass@127.0.0.1:8081"
});

Options reference

You can pass these options to the Crawler() constructor if you want them to be global, or as items in the queue() calls if you want them to be specific to that item (overriding the global options). A short sketch follows the option categories below.

This options list is a strict superset of mikeal's request options and will be directly passed to the request() method.

Basic request options:

Callbacks:

Pool options:

Retry options:

Server-side DOM options:

Charset encoding:

Cache:

Other:
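
For illustration, here is a minimal sketch of a global option being overridden per item. The URL, the headers value, and the forceUTF8 flag are assumptions made for this example, not part of the reference above.

var Crawler = require("node-webcrawler");

var c = new Crawler({
    // Global options: apply to every queued item unless overridden
    maxConnections : 5,
    forceUTF8 : true,
    callback : function (error, result, $) {
        if(error){
            console.error(error);
        }else{
            console.log($("title").text());
        }
    }
});

// Per-item options override the global ones for this request only.
// "headers" is a plain request option, passed through to request().
c.queue({
    uri: "http://www.google.com/",
    forceUTF8: false,
    headers: { "User-Agent": "my-crawler/1.0" }
});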

Class: Crawler

Instance of Crawler

crawler.queue(uri|options)

crawler.queueSize

Size of queue, read-only
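
For reference, a minimal usage sketch of both members (the URLs are illustrative):

var Crawler = require("node-webcrawler");

var c = new Crawler({
    maxConnections : 10,
    callback : function (error, result, $) {
        if(error){
            console.error(error);
        }else{
            console.log($("title").text());
        }
        // Read-only snapshot of how many items remain queued
        console.log("Queue size:", c.queueSize);
    }
});

// queue() accepts a single URI string, an options object, or an array of either
c.queue("http://www.google.com/");
c.queue({
    uri: "http://www.yahoo.com/",
    jQuery: false,
    callback: function (error, result) {
        if(!error) console.log("Grabbed", result.body.length, "bytes");
    }
});

console.log("Queue size after queueing:", c.queueSize);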

Working with Cheerio or JSDOM

By default, Crawler uses Cheerio instead of JSDOM. JSDOM is more robust but can be hard to install (especially on Windows) because of contextify. For this reason, if you want to use JSDOM you have to build it yourself and require('jsdom') in your own script before passing it to Crawler. This avoids forcing Cheerio users to build JSDOM when they install Crawler.

Working with Cheerio

jQuery: true //(default)
//OR
jQuery: 'cheerio'
//OR
jQuery: {
    name: 'cheerio',
    options: {
        normalizeWhitespace: true,
        xmlMode: true
    }
}

These parsing options are taken directly from htmlparser2; therefore, any option that can be used in htmlparser2 is also valid in cheerio. The default options are:

{
    normalizeWhitespace: false,
    xmlMode: false,
    decodeEntities: true
}

For a full list of options and their effects, see the cheerio documentation and htmlparser2's options.

Working with JSDOM

To work with JSDOM you have to install it in your project folder (npm install jsdom), deal with compiling its C++ dependencies, and pass it to Crawler:

var jsdom = require('jsdom');
var Crawler = require('node-webcrawler');

var c = new Crawler({
    jQuery: jsdom
});

How to test

Install and run Httpbin

node-webcrawler uses a local httpbin instance for testing. You can install httpbin as a library from PyPI and run it as a WSGI app, for example with Gunicorn:

$ pip install httpbin
// launch httpbin as a daemon with 6 workers on localhost
$ gunicorn httpbin:app -b 127.0.0.1:8000 -w 6 --daemon

// Finally
$ npm install && npm test

Alternative: Docker

After installing Docker, you can run:

// Builds the local test environment
$ docker build -t node-webcrawler .

// Runs tests
$ docker run node-webcrawler sh -c "gunicorn httpbin:app -b 127.0.0.1:8000 -w 6 --daemon && npm install && npm test"

// You can also ssh into the container for easier debugging
$ docker run -i -t node-webcrawler bash


Rough todolist

ChangeLog

See https://github.com/bda-research/node-webcrawler/releases