I think the problem is with scrapy-fake-useragent, because everything works fine without it, but the author hasn't commented on issues since December.
Hi, thanks for the report. From a quick look, I think it has to do with the following from the upstream docs for Page.goto:
> page.goto either throws an error or returns a main resource response. The only exceptions are navigation to about:blank or navigation to the same URL with a different hash, which would succeed and return null.
(Note that it says null instead of None: it's a translation of the docs for the original JS version.)
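This behavior can be checked directly with Playwright, outside of Scrapy. A minimal sketch, assuming Playwright for Python and the Chromium browser are installed:

```python
# Sketch: page.goto returns None (null in the JS docs) when navigating
# to the same URL with only the hash changed.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    first = page.goto("http://example.org#1")
    print(first)   # a Response object for the main resource
    second = page.goto("http://example.org#2")  # same URL, different hash
    print(second)  # None: no new main resource response is produced
    browser.close()
```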
I'm able to reproduce consistently with the following:
```python
import scrapy
from scrapy.crawler import CrawlerProcess


class HeadersSpider(scrapy.Spider):
    name = "headers"

    def start_requests(self):
        yield scrapy.Request(
            url="http://example.org#1",
            meta={"playwright": True, "playwright_include_page": True},
        )

    def parse(self, response):
        return scrapy.Request(
            url="http://example.org#2",
            meta={"playwright": True, "playwright_page": response.meta["playwright_page"]},
            dont_filter=True,
        )


if __name__ == "__main__":
    process = CrawlerProcess(
        settings={
            "TWISTED_REACTOR": "twisted.internet.asyncioreactor.AsyncioSelectorReactor",
            "DOWNLOAD_HANDLERS": {
                "http": "scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler",
            },
        }
    )
    process.crawl(HeadersSpider)
    process.start()
```
```
$ python examples/headers.py
(...)
2021-04-10 20:00:57 [scrapy.core.scraper] ERROR: Error downloading <GET http://example.org#2>
Traceback (most recent call last):
  File "/Users/eus/zyte/scrapy-playwright/venv-scrapy-playwright/lib/python3.8/site-packages/twisted/internet/defer.py", line 1416, in _inlineCallbacks
    result = result.throwExceptionIntoGenerator(g)
  File "/Users/eus/zyte/scrapy-playwright/venv-scrapy-playwright/lib/python3.8/site-packages/twisted/python/failure.py", line 512, in throwExceptionIntoGenerator
    return g.throw(self.type, self.value, self.tb)
  File "/Users/eus/zyte/scrapy-playwright/venv-scrapy-playwright/lib/python3.8/site-packages/scrapy/core/downloader/middleware.py", line 45, in process_request
    return (yield download_func(request=request, spider=spider))
  File "/Users/eus/zyte/scrapy-playwright/venv-scrapy-playwright/lib/python3.8/site-packages/twisted/internet/defer.py", line 824, in adapt
    extracted = result.result()
  File "/Users/eus/zyte/scrapy-playwright/scrapy_playwright/handler.py", line 140, in _download_request
    result = await self._download_request_with_page(request, spider, page)
  File "/Users/eus/zyte/scrapy-playwright/scrapy_playwright/handler.py", line 180, in _download_request_with_page
    headers = Headers(response.headers)
AttributeError: 'NoneType' object has no attribute 'headers'
(...)
```
I'm not entirely sure why the error occurs in your case, though: you don't seem to be setting dont_filter=True, using a custom dupefilter, or reusing a Playwright page.
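For context, the reason dont_filter=True is needed in the example above: Scrapy's default dupefilter ignores URL fragments when computing request fingerprints, so the second request would otherwise be dropped as a duplicate. A quick sketch of that check, using the request_fingerprint helper available in Scrapy at the time:

```python
# Sketch: Scrapy's default request fingerprint strips the URL fragment,
# so http://example.org#1 and http://example.org#2 count as duplicates.
from scrapy import Request
from scrapy.utils.request import request_fingerprint

r1 = Request("http://example.org#1")
r2 = Request("http://example.org#2")
print(request_fingerprint(r1) == request_fingerprint(r2))  # True
```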
Thanks for the answer, but I still don't understand why this is happening, because some pages are crawled fine. It works the same with or without dont_filter=True. I also tried reusing the Playwright page, but I still get the same error.
I turned off headless mode and watched the process. It looks the same with and without the error. My real crawler includes a PageCoroutine that waits for the elements on the page to load by waiting for a special class on the loading-bar element (sketched after the patch below). The page loads fine and the coroutine waits for it. The error occurs after the target class appears, so the target page exists and so do its elements. I need a working solution right now, so I patched the _download_request_with_page function as follows, but this is obviously not a proper fix:
```python
...
headers = None
status = 200
if response:
    headers = Headers(response.headers)
    headers.pop("Content-Encoding", None)
    status = response.status
respcls = responsetypes.from_args(headers=headers, url=page.url, body=body)
return respcls(
    url=page.url,
    status=status,
    headers=headers,
    body=body,
    request=request,
    flags=["playwright"],
)
```
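For reference, the PageCoroutine mentioned above looks roughly like this (a sketch: the selector is hypothetical, and the meta key matches scrapy-playwright's API at the time):

```python
# Sketch: wait for a hypothetical "done" class on the loading-bar element
# before the response is returned to the spider.
from scrapy_playwright.page import PageCoroutine

meta = {
    "playwright": True,
    "playwright_page_coroutines": [
        PageCoroutine("wait_for_selector", "div.loading-bar.done"),
    ],
}
```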
The error occurs at arbitrary times: maybe after crawling 500 pages, or on the very first one.