scrapy-plugins / scrapy-playwright

🎭 Playwright integration for Scrapy

AttributeError: 'NoneType' object has no attribute 'headers' #10

Closed: michaelvsinko closed this issue 2 years ago

michaelvsinko commented 3 years ago

The error occurs at arbitrary times: it may appear after crawling 500 pages, or on the very first one.

Example to reproduce

import re

from scrapy import Spider, Request
from scrapy.crawler import CrawlerProcess
from scrapy.selector import Selector
from scrapy.utils.project import get_project_settings

class Spider1(Spider):
  name = "spider1"
  custom_settings = {
    "FEEDS": {
      "members.json": {
        "format": "jsonlines",
        "encoding": "utf-8",
        "store_empty": False,
      },
    }
  }

  URL_MEMBERS_POSTFIX = "?act=members&offset=0"
  MEMBERS_LIST = '//div[@id="mcont"]/descendant::div[has-class("upanel")]/a[has-class("inline_item")]'
  MEMBER_NAME = './div[has-class("ii_body")]/span[has-class("ii_owner")]/text()'
  NEXT_URL = '//div[@id="mcont"]/descendant::div[has-class("upanel")]/div[has-class("pagination")]/a[has-class("pg_link_sel")]/following-sibling::a[has-class("pg_link")]/@href'

  def start_requests(self):
    for url in self.start_urls:
      yield Request(url=url + self.URL_MEMBERS_POSTFIX, meta={"playwright": True})

  def parse(self, response):
    selector = Selector(response)

    members = selector.xpath(self.MEMBERS_LIST)
    for member in members:
      member_name = member.xpath(self.MEMBER_NAME).get()

      yield {"member_name": member_name}

    next_url = selector.xpath(self.NEXT_URL).get()
    if next_url:
      next_offset = re.findall(r"offset=[0-9]*", next_url)[0]
      next_url = re.sub(r"offset=[0-9]*", next_offset, response.url)

      yield Request(url=next_url, meta={"playwright": True}, callback=self.parse)

if __name__ == "__main__":
  settings = get_project_settings()
  process = CrawlerProcess(settings=settings)
  process.crawl(Spider1, start_urls=["https://m.vk.com/vkmusicians"])
  #process.crawl(Spider1, start_urls=["https://m.vk.com/tumblr_perf"])
  process.start()

Settings

BOT_NAME = 'crawlers'

SPIDER_MODULES = ['crawlers.spiders']
NEWSPIDER_MODULE = 'crawlers.spiders'

ROBOTSTXT_OBEY = False

DOWNLOAD_DELAY = 3

CONCURRENT_REQUESTS = 1
CONCURRENT_REQUESTS_PER_DOMAIN = 1
CONCURRENT_REQUESTS_PER_IP = 1

COOKIES_ENABLED = False

TWISTED_REACTOR = "twisted.internet.asyncioreactor.AsyncioSelectorReactor"

DOWNLOAD_HANDLERS = {
    "https": "scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler",
    #"http": "scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler",
}

DOWNLOADER_MIDDLEWARES = {
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
    'scrapy.downloadermiddlewares.retry.RetryMiddleware': None,
    'scrapy_fake_useragent.middleware.RandomUserAgentMiddleware': 400,
    'scrapy_fake_useragent.middleware.RetryUserAgentMiddleware': 401,
}

FAKEUSERAGENT_PROVIDERS = [
    'scrapy_fake_useragent.providers.FakeUserAgentProvider',
    'scrapy_fake_useragent.providers.FakerProvider',
    'scrapy_fake_useragent.providers.FixedUserAgentProvider',
]
FAKEUSERAGENT_FALLBACK = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 11_2_3) AppleWebKit/537.36 (KHTML, like Gecko) Brave Chrome/89.0.4389.105 Safari/537.36'

PLAYWRIGHT_BROWSER_TYPE = "chromium"
FAKE_USERAGENT_RANDOM_UA_TYPE = "chrome"
FAKER_RANDOM_UA_TYPE = "chrome"

Error

Traceback (most recent call last):
  File "/Users/user/develop/work/crawlers/.venv/lib/python3.8/site-packages/twisted/internet/defer.py", line 1443, in _inlineCallbacks
    result = current_context.run(result.throwExceptionIntoGenerator, g)
  File "/Users/user/develop/work/crawlers/.venv/lib/python3.8/site-packages/twisted/python/failure.py", line 500, in throwExceptionIntoGenerator
    return g.throw(self.type, self.value, self.tb)
  File "/Users/user/develop/work/crawlers/.venv/lib/python3.8/site-packages/scrapy/core/downloader/middleware.py", line 44, in process_request
    return (yield download_func(request=request, spider=spider))
  File "/Users/user/develop/work/crawlers/.venv/lib/python3.8/site-packages/twisted/internet/defer.py", line 837, in adapt
    extracted = result.result()
  File "/Users/user/develop/work/crawlers/.venv/lib/python3.8/site-packages/scrapy_playwright/handler.py", line 140, in _download_request
    result = await self._download_request_with_page(request, spider, page)
  File "/Users/user/develop/work/crawlers/.venv/lib/python3.8/site-packages/scrapy_playwright/handler.py", line 180, in _download_request_with_page
    headers = Headers(response.headers)
AttributeError: 'NoneType' object has no attribute 'headers'


michaelvsinko commented 3 years ago

I think the problem is with scrapy-fake-useragent, because everything works fine without it, but its author has not commented on issues since December.

elacuesta commented 3 years ago

Hi, thanks for the report. From a quick look, I think it has to do with the following from the upstream docs for Page.goto:

page.goto either throws an error or returns a main resource response. The only exceptions are navigation to about:blank or navigation to the same URL with a different hash, which would succeed and return null.

(Note that it says null instead of None: the Python docs are translated from the original JS version.)
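
For illustration, the same behavior can be seen with plain Playwright outside of Scrapy. A minimal sketch using the sync API (the URLs here are arbitrary):

from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    # First navigation returns a Response for the main resource.
    first = page.goto("http://example.org/#1")
    print(first)   # a Response object
    # Navigating to the same URL with only a different hash returns None.
    second = page.goto("http://example.org/#2")
    print(second)  # None
    browser.close()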

I'm able to reproduce consistently with the following:

import scrapy
from scrapy.crawler import CrawlerProcess

class HeadersSpider(scrapy.Spider):
    name = "headers"

    def start_requests(self):
        yield scrapy.Request(
            url="http://example.org#1",
            meta={"playwright": True, "playwright_include_page": True},
        )

    def parse(self, response):
        return scrapy.Request(
            url="http://example.org#2",
            meta={"playwright": True, "playwright_page": response.meta["playwright_page"]},
            dont_filter=True,
        )

if __name__ == "__main__":
    process = CrawlerProcess(
        settings={
            "TWISTED_REACTOR": "twisted.internet.asyncioreactor.AsyncioSelectorReactor",
            "DOWNLOAD_HANDLERS": {
                "http": "scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler",
            },
        }
    )
    process.crawl(HeadersSpider)
    process.start()
$ python examples/headers.py 
(...)
2021-04-10 20:00:57 [scrapy.core.scraper] ERROR: Error downloading <GET http://example.org#2>
Traceback (most recent call last):
  File "/Users/eus/zyte/scrapy-playwright/venv-scrapy-playwright/lib/python3.8/site-packages/twisted/internet/defer.py", line 1416, in _inlineCallbacks
    result = result.throwExceptionIntoGenerator(g)
  File "/Users/eus/zyte/scrapy-playwright/venv-scrapy-playwright/lib/python3.8/site-packages/twisted/python/failure.py", line 512, in throwExceptionIntoGenerator
    return g.throw(self.type, self.value, self.tb)
  File "/Users/eus/zyte/scrapy-playwright/venv-scrapy-playwright/lib/python3.8/site-packages/scrapy/core/downloader/middleware.py", line 45, in process_request
    return (yield download_func(request=request, spider=spider))
  File "/Users/eus/zyte/scrapy-playwright/venv-scrapy-playwright/lib/python3.8/site-packages/twisted/internet/defer.py", line 824, in adapt
    extracted = result.result()
  File "/Users/eus/zyte/scrapy-playwright/scrapy_playwright/handler.py", line 140, in _download_request
    result = await self._download_request_with_page(request, spider, page)
  File "/Users/eus/zyte/scrapy-playwright/scrapy_playwright/handler.py", line 180, in _download_request_with_page
    headers = Headers(response.headers)
AttributeError: 'NoneType' object has no attribute 'headers'
(...)

I'm not entirely sure why the error occurs in your case though, since you don't seem to be setting dont_filter=True, using a custom dupefilter, or reusing a Playwright page.

michaelvsinko commented 3 years ago

Thanks for the answer. But I still don't understand why this is happening, because some pages are crawled fine.

It behaves the same with or without dont_filter=True. I also tried reusing the Playwright page, but I still get the same error.

I turned off headless mode and watched the process; it looks the same whether or not the error occurs. My real crawler includes a PageCoroutine that waits for the elements on the page to load by waiting for a marker class on the loading-bar element (a request of that shape is sketched after the workaround below). The page loads fine and the coroutine waits for it; the error occurs after the target class appears, so the target page and its elements do exist. I need a working solution right now, so I patched the _download_request_with_page function as follows, but this is obviously not a proper fix.

...
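# Workaround: page.goto() may return None (e.g. for a same-URL navigation with
# a different hash), so fall back to default headers/status in that case.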
headers = None
status = 200
if response:
    headers = Headers(response.headers)
    headers.pop("Content-Encoding", None)
    status = response.status
respcls = responsetypes.from_args(headers=headers, url=page.url, body=body)

return respcls(
    url=page.url,
    status=status,
    headers=headers,
    body=body,
    request=request,
    flags=["playwright"],
)
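
For reference, a request with that kind of wait would look roughly like this (the selector here is made up for illustration, not the real one):

from scrapy import Request
from scrapy_playwright.page import PageCoroutine

request = Request(
    url="https://m.vk.com/vkmusicians?act=members&offset=0",
    meta={
        "playwright": True,
        "playwright_page_coroutines": [
            # Wait until the loading bar gets its "done" class (illustrative selector).
            PageCoroutine("wait_for_selector", "div.loading_bar.done"),
        ],
    },
)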