=========
SerpScrap
=========

.. image:: https://img.shields.io/pypi/v/SerpScrap.svg
   :target: https://pypi.python.org/pypi/SerpScrap

.. image:: https://readthedocs.org/projects/serpscrap/badge/?version=latest
   :target: http://serpscrap.readthedocs.io/en/latest/
   :alt: Documentation Status

.. image:: https://travis-ci.org/ecoron/SerpScrap.svg?branch=master
   :target: https://travis-ci.org/ecoron/SerpScrap

.. image:: https://img.shields.io/docker/pulls/ecoron/serpscrap.svg
   :target: https://hub.docker.com/r/ecoron/serpscrap

SerpScrap is a Python SEO scraper that extracts data from the result pages of major search engines. For a given list of keywords it collects the URL, title, snippet, rich snippet, and result type of each search result. It can also detect ads, take automated screenshots, and fetch the text content of URLs found in the results or supplied by you. It is useful for SEO and business-related research tasks.

Extract these result types
--------------------------

For each result on a result page you get

You can also take a screenshot of each result page and scrape the text content of each result URL. It is also possible to save the results as CSV for later analysis. If required, you can use your own proxy list.
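Saving results as CSV can also be done by hand in a few lines; a minimal sketch using only the standard library, assuming result dicts with field names like ``query``, ``serp_title``, ``serp_url``, and ``serp_type`` (the exact keys in SerpScrap's output may differ):

```python
import csv
import io

# Sample results shaped like SerpScrap output dicts; the field
# names here are assumptions for illustration.
results = [
    {'query': 'example', 'serp_title': 'Example Domain',
     'serp_url': 'http://example.com', 'serp_type': 'results'},
    {'query': 'example', 'serp_title': 'Some Ad',
     'serp_url': 'http://ads.example.com', 'serp_type': 'ads_main'},
]

fieldnames = ['query', 'serp_title', 'serp_url', 'serp_type']
buffer = io.StringIO()  # swap in open('results.csv', 'w', newline='') for a file
writer = csv.DictWriter(buffer, fieldnames=fieldnames)
writer.writeheader()       # first row: column names
writer.writerows(results)  # one row per search result

csv_text = buffer.getvalue()
print(csv_text)
```

Writing to an in-memory buffer keeps the sketch self-contained; for real use, write to a file handle instead.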

Resources
---------

See http://serpscrap.readthedocs.io/en/latest/ for documentation.

Source is available at https://github.com/ecoron/SerpScrap

Install
-------

The easy way to do it:

.. code-block:: bash

   pip uninstall SerpScrap -y
   pip install SerpScrap --upgrade

More details in the install_ section of the documentation.

Usage
-----

SerpScrap in your applications

.. code-block:: python

   #!/usr/bin/python3
   # -*- coding: utf-8 -*-
   import pprint

   import serpscrap

   keywords = ['example']

   config = serpscrap.Config()
   config.set('scrape_urls', False)

   scrap = serpscrap.SerpScrap()
   scrap.init(config=config.get(), keywords=keywords)
   results = scrap.run()

   for result in results:
       pprint.pprint(result)

More details in the examples_ section of the documentation.
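Since SerpScrap can detect ads, the result type can be used in post-processing. A minimal sketch that splits results into ads and organic hits, assuming each result dict carries a ``serp_type`` field (the actual key names and type values in SerpScrap's output may differ):

```python
# Sample results shaped like SerpScrap output; the field names and
# 'ads_main' type value are assumptions for illustration.
results = [
    {'serp_url': 'http://example.com/a', 'serp_type': 'results'},
    {'serp_url': 'http://ads.example.com/b', 'serp_type': 'ads_main'},
    {'serp_url': 'http://example.com/c', 'serp_type': 'results'},
]

# Partition by result type: anything flagged as an ad vs. the rest.
ads = [r for r in results if r['serp_type'].startswith('ads')]
organic = [r for r in results if not r['serp_type'].startswith('ads')]

print(len(ads), len(organic))
```

The same pattern works for any of the extracted fields, e.g. grouping by domain or rank.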

To avoid encode/decode issues on Windows, run these commands before you start using SerpScrap in your CLI.

.. code-block:: bash

   chcp 65001
   set PYTHONIOENCODING=utf-8
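
A similar fix can be applied from inside Python. The sketch below uses only the standard library and an in-memory byte stream to show what ``PYTHONIOENCODING=utf-8`` effectively does for stdout, namely forcing a UTF-8 text layer over the raw byte stream:

```python
import io

# Simulate a console byte stream and force UTF-8 text on top of it,
# mirroring what PYTHONIOENCODING=utf-8 does for sys.stdout.
raw = io.BytesIO()
utf8_out = io.TextIOWrapper(raw, encoding='utf-8')

utf8_out.write('snippet with umlauts: äöü')
utf8_out.flush()

encoded = raw.getvalue()
print(encoded)
```

On Python 3.7+ you can also call ``sys.stdout.reconfigure(encoding='utf-8')`` directly in your script instead of setting the environment variable.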

.. image:: https://raw.githubusercontent.com/ecoron/SerpScrap/master/docs/logo.png
   :target: https://github.com/ecoron/SerpScrap

Supported OS
------------

Changes
-------

Notes about major changes between releases

0.13.0

0.12.0

I recommend updating to the latest version of SerpScrap, because the search engine has updated the markup of its search result pages (SERPs).

0.11.0

0.10.0

0.9.0

0.8.0

References
----------

SerpScrap uses `Chrome headless`_ and lxml_ to scrape SERP results. For the raw text content of fetched URLs it uses beautifulsoup4_. SerpScrap also supports PhantomJs_, a scriptable headless WebKit (now deprecated), which is installed automatically on the first run (Linux, Windows). The scrapcore is based on GoogleScraper_, an outdated project, with many changes and improvements.
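The raw-text step that beautifulsoup4 handles can be illustrated with only the standard library. A minimal sketch of the same idea using ``html.parser`` (this is not SerpScrap's actual code, just the underlying technique):

```python
from html.parser import HTMLParser


class TextExtractor(HTMLParser):
    """Collect visible text, skipping <script> and <style> blocks."""

    def __init__(self):
        super().__init__()
        self.chunks = []
        self._skip = 0  # nesting depth inside script/style tags

    def handle_starttag(self, tag, attrs):
        if tag in ('script', 'style'):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ('script', 'style') and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.chunks.append(data.strip())


html = ('<html><head><style>p{color:red}</style></head>'
        '<body><p>Hello <b>world</b></p><script>var x=1;</script></body></html>')
parser = TextExtractor()
parser.feed(html)
text = ' '.join(parser.chunks)
print(text)  # Hello world
```

beautifulsoup4 does the same job far more robustly against broken real-world markup, which is why the library relies on it.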

.. target-notes::

.. _install: http://serpscrap.readthedocs.io/en/latest/install.html
.. _examples: http://serpscrap.readthedocs.io/en/latest/examples.html
.. _`Chrome headless`: http://chromedriver.chromium.org/
.. _lxml: https://lxml.de/
.. _beautifulsoup4: https://www.crummy.com/software/BeautifulSoup/
.. _PhantomJs: https://github.com/ariya/phantomjs
.. _GoogleScraper: https://github.com/NikolaiT/GoogleScraper