
Scrapy with Puppeteer


Scrapy middleware to handle JavaScript pages using Puppeteer.

⚠ IN ACTIVE DEVELOPMENT - READ BEFORE USING ⚠

This is an attempt to make Scrapy and Puppeteer work together to handle JavaScript-rendered pages. The design is strongly inspired by the Scrapy Splash plugin.

Scrapy and Puppeteer

The main issue when running Scrapy and Puppeteer together is that Scrapy uses Twisted, while Pyppeteer (the Python port of Puppeteer this module relies on) uses asyncio for asynchronous operations.

Luckily, we can use Twisted's asyncio reactor to make the two talk to each other.

That's why you cannot use the built-in scrapy command line (it installs the default reactor); you will have to use the scrapyp one provided by this module instead.

If you are running your spiders from a script, you will have to make sure you install the asyncio reactor before importing scrapy or doing anything else:

import asyncio
from twisted.internet import asyncioreactor

asyncioreactor.install(asyncio.get_event_loop())
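Putting this together, a minimal runner script might look like the sketch below. This is an assumption-laden example, not part of the library's documented API: YourSpider and the myproject.spiders module are hypothetical placeholders, and the middleware setting mirrors the Configuration section below.

```python
import asyncio
from twisted.internet import asyncioreactor

# Install the asyncio reactor first - importing scrapy before this line
# would install the default (non-asyncio) reactor instead.
asyncioreactor.install(asyncio.get_event_loop())

from scrapy.crawler import CrawlerProcess  # noqa: E402
from myproject.spiders import YourSpider   # hypothetical spider module

process = CrawlerProcess(settings={
    'DOWNLOADER_MIDDLEWARES': {
        'scrapy_puppeteer.PuppeteerMiddleware': 800,
    },
})
process.crawl(YourSpider)
process.start()  # blocks until the crawl finishes
```
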

Installation

$ pip install scrapy-puppeteer

Configuration

Add the PuppeteerMiddleware to the downloader middlewares:

DOWNLOADER_MIDDLEWARES = {
    'scrapy_puppeteer.PuppeteerMiddleware': 800
}

Usage

Use scrapy_puppeteer.PuppeteerRequest instead of the Scrapy built-in Request, like below:

from scrapy_puppeteer import PuppeteerRequest

def your_parse_method(self, response):
    # Your code...
    yield PuppeteerRequest('http://httpbin.org', self.parse_result)

The request will then be handled by Puppeteer.

The selector response attribute works as usual (but contains the HTML processed by Puppeteer).

def parse_result(self, response):
    # text() selects the element's text content; @text would select
    # a (nonexistent) "text" attribute.
    print(response.selector.xpath('//title/text()'))

Additional arguments

The scrapy_puppeteer.PuppeteerRequest accepts the following additional arguments:

wait_until

Will be passed to the waitUntil parameter of Puppeteer. Defaults to domcontentloaded.

wait_for

Will be passed to the waitFor method of Puppeteer.
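As a sketch of how the two wait arguments above might be combined (assuming both are plain keyword arguments of PuppeteerRequest; the URL and CSS selector are placeholders):

```python
from scrapy_puppeteer import PuppeteerRequest

def your_parse_method(self, response):
    # Wait for the page's load event, then for a (hypothetical)
    # CSS selector to appear, before the response is returned to Scrapy.
    yield PuppeteerRequest(
        'http://httpbin.org',
        self.parse_result,
        wait_until='load',
        wait_for='#content',  # hypothetical selector
    )
```
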

screenshot

When set, Puppeteer will take a screenshot of the page, and the binary data of the captured .png will be added to the response meta:

yield PuppeteerRequest(
    url,
    self.parse_result,
    screenshot=True
)

def parse_result(self, response):
    with open('image.png', 'wb') as image_file:
        image_file.write(response.meta['screenshot'])
