
Unmaintained

If you need browser integration for Scrapy, please consider using scrapy-playwright instead.


Pyppeteer integration for Scrapy


This project provides a Scrapy Download Handler which performs requests using Pyppeteer. It can be used to handle pages that require JavaScript. This package does not interfere with regular Scrapy workflows such as request scheduling or item processing.

Motivation

After the release of version 2.0, which includes partial coroutine syntax support and experimental asyncio support, Scrapy makes it possible to integrate asyncio-based projects such as Pyppeteer.

Requirements
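
Scrapy 2.0 or later, which added the asyncio support this package relies on (see the Motivation section above), and a Python version supported by Pyppeteer (3.6+).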

Installation

$ pip install scrapy-pyppeteer
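
Pyppeteer downloads its own Chromium build the first time a browser is launched. To fetch it ahead of time instead, you can run the pyppeteer-install command that ships with Pyppeteer:

$ pyppeteer-install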

Configuration

Replace the default http and https Download Handlers through DOWNLOAD_HANDLERS:

DOWNLOAD_HANDLERS = {
    "http": "scrapy_pyppeteer.handler.ScrapyPyppeteerDownloadHandler",
    "https": "scrapy_pyppeteer.handler.ScrapyPyppeteerDownloadHandler",
}

Note that the ScrapyPyppeteerDownloadHandler class inherits from the default http/https handler, and it will only use Pyppeteer for requests that are explicitly marked (see the "Basic usage" section for details).

Also, be sure to install the asyncio-based Twisted reactor:

TWISTED_REACTOR = "twisted.internet.asyncioreactor.AsyncioSelectorReactor"
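
If you run spiders from a script instead of the scrapy command-line tool, the same two settings can be passed to CrawlerProcess, which takes care of installing the configured reactor. A minimal sketch, reusing the AwesomeSpider defined under "Basic usage" below:

from scrapy.crawler import CrawlerProcess

process = CrawlerProcess(
    settings={
        "DOWNLOAD_HANDLERS": {
            "http": "scrapy_pyppeteer.handler.ScrapyPyppeteerDownloadHandler",
            "https": "scrapy_pyppeteer.handler.ScrapyPyppeteerDownloadHandler",
        },
        "TWISTED_REACTOR": "twisted.internet.asyncioreactor.AsyncioSelectorReactor",
    }
)
process.crawl(AwesomeSpider)  # the spider from the "Basic usage" section below
process.start()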

scrapy-pyppeteer accepts the following settings:

Basic usage

Set the pyppeteer Request.meta key to download a request using Pyppeteer:

import scrapy

class AwesomeSpider(scrapy.Spider):
    name = "awesome"

    def start_requests(self):
        # GET request
        yield scrapy.Request("https://httpbin.org/get", meta={"pyppeteer": True})
        # POST request
        yield scrapy.FormRequest(
            url="https://httpbin.org/post",
            formdata={"foo": "bar"},
            meta={"pyppeteer": True},
        )

    def parse(self, response):
        # 'response' contains the page as seen by the browser
        yield {"url": response.url}

Page coroutines

An ordered iterable (a list, tuple or dict, for instance) can be passed in the pyppeteer_page_coroutines Request.meta key, containing coroutines to be awaited on the Page before returning the final Response to the callback.

This is useful when you need to perform certain actions on a page, like scrolling down or clicking links, and you want everything to count as a single Scrapy Response, containing the final result.
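
For example, the following request is not considered done until an element matching the div.quote selector appears on the page (a minimal sketch; PageCoroutine is used the same way in the examples further below):

from scrapy_pyppeteer.page import PageCoroutine

# inside a spider's start_requests method:
yield scrapy.Request(
    "http://quotes.toscrape.com/scroll",
    meta={
        "pyppeteer": True,
        "pyppeteer_page_coroutines": [
            PageCoroutine("waitForSelector", "div.quote"),
        ],
    },
)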

Supported actions
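
As the imports in the examples below show, two helpers are available in the scrapy_pyppeteer.page module:

PageCoroutine(method, *args, **kwargs): a coroutine to be awaited on the Page, identified by the name of a pyppeteer.page.Page method plus the arguments to call it with, e.g. PageCoroutine("waitForSelector", "div.quote"). The coroutine's return value is stored in the object's result attribute.

NavigationPageCoroutine: a PageCoroutine subclass used for actions that trigger a page navigation, such as clicking a link.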

Receiving the Page object in the callback

Specifying pyppeteer.page.Page as the type for a callback argument will result in the corresponding Page object being injected into the callback. In order to be able to await coroutines on the provided Page object, the callback needs to be defined as a coroutine function (async def).

import scrapy
import pyppeteer

class AwesomeSpiderWithPage(scrapy.Spider):
    name = "page"

    def start_requests(self):
        yield scrapy.Request("https://example.org", meta={"pyppeteer": True})

    async def parse(self, response, page: pyppeteer.page.Page):
        title = await page.title()  # "Example Domain"
        yield {"title": title}
        await page.close()

Notes:
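
To avoid leaking browser resources, close the page when you are done with it, as the example above does with await page.close().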

Examples

Click on a link, save the resulting page as PDF

import scrapy
from scrapy_pyppeteer.page import PageCoroutine, NavigationPageCoroutine

class ClickAndSavePdfSpider(scrapy.Spider):
    name = "pdf"

    def start_requests(self):
        yield scrapy.Request(
            url="https://example.org",
            meta=dict(
                pyppeteer=True,
                pyppeteer_page_coroutines={
                    "click": NavigationPageCoroutine("click", selector="a"),
                    "pdf": PageCoroutine("pdf", options={"path": "/tmp/file.pdf"}),
                },
            ),
        )

    def parse(self, response):
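        # the return value of each awaited coroutine is stored in the PageCoroutine's result attribute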
        pdf_bytes = response.meta["pyppeteer_page_coroutines"]["pdf"].result
        with open("iana.pdf", "wb") as fp:
            fp.write(pdf_bytes)
        yield {"url": response.url}  # response.url is "https://www.iana.org/domains/reserved"

Scroll down on an infinite scroll page, take a screenshot of the full page

import scrapy
import pyppeteer
from scrapy_pyppeteer.page import PageCoroutine

class ScrollSpider(scrapy.Spider):
    name = "scroll"

    def start_requests(self):
        yield scrapy.Request(
            url="http://quotes.toscrape.com/scroll",
            meta=dict(
                pyppeteer=True,
                pyppeteer_page_coroutines=[
                    PageCoroutine("waitForSelector", "div.quote"),
                    PageCoroutine("evaluate", "window.scrollBy(0, document.body.scrollHeight)"),
                    PageCoroutine("waitForSelector", "div.quote:nth-child(11)"),  # 10 per page
                    PageCoroutine("screenshot", options={"path": "quotes.png", "fullPage": True}),
                ],
            ),
        )

    def parse(self, response):
        return {"quote_count": len(response.css("div.quote"))}

Acknowledgements

This project was inspired by: