crawlall.py runs smoothly at first, but at some point the Chrome window gets stuck loading a page and the script waits forever. The only way to unstick it is to click Chrome's "X" button ("Stop loading this page"), after which the following error is printed (and, fortunately, crawling continues):
Traceback (most recent call last):
File "/home/bomberb17/make-og-pixel-great-again/crawlall.py", line 50, in <module>
driver.get(row.productUrl)
File "/home/bomberb17/.local/lib/python3.10/site-packages/undetected_chromedriver/__init__.py", line 665, in get
return super().get(url)
File "/home/bomberb17/.local/lib/python3.10/site-packages/selenium/webdriver/remote/webdriver.py", line 354, in get
self.execute(Command.GET, {"url": url})
File "/home/bomberb17/.local/lib/python3.10/site-packages/selenium/webdriver/remote/webdriver.py", line 345, in execute
self.error_handler.check_response(response)
File "/home/bomberb17/.local/lib/python3.10/site-packages/selenium/webdriver/remote/errorhandler.py", line 229, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.TimeoutException: Message: timeout: Timed out receiving message from renderer: -48.625
(Session info: chrome=120.0.6099.216)
Stacktrace:
#0 0x564daca70f83 <unknown>
#1 0x564dac729cf7 <unknown>
#2 0x564dac70f7a1 <unknown>
#3 0x564dac70f44a <unknown>
#4 0x564dac70d7e1 <unknown>
#5 0x564dac70e18a <unknown>
#6 0x564dac71f07c <unknown>
#7 0x564dac7377c1 <unknown>
#8 0x564dac73d6bb <unknown>
#9 0x564dac70e92d <unknown>
#10 0x564dac7375c2 <unknown>
#11 0x564dac7c2204 <unknown>
#12 0x564dac7a2e53 <unknown>
#13 0x564dac76add4 <unknown>
#14 0x564dac76c1de <unknown>
#15 0x564daca35531 <unknown>
#16 0x564daca39455 <unknown>
#17 0x564daca21f55 <unknown>
#18 0x564daca3a0ef <unknown>
#19 0x564daca0599f <unknown>
#20 0x564daca5e008 <unknown>
#21 0x564daca5e1d7 <unknown>
#22 0x564daca70124 <unknown>
#23 0x7f8830294ac3 <unknown>
Maybe there should be a way to time out and continue without human intervention?
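A minimal sketch of that, assuming the loop shape implied by the traceback (`driver.get(row.productUrl)` over DataFrame rows): set Selenium's page-load timeout and catch the `TimeoutException`, so the script stops the load itself instead of waiting for someone to click the "X" button. The 60-second limit, the CSV input path, and the `window.stop()` call are assumptions:

```python
import pandas as pd
import undetected_chromedriver as uc
from selenium.common.exceptions import TimeoutException

df = pd.read_csv("products.csv")  # hypothetical input; use crawlall.py's actual data source

driver = uc.Chrome()
driver.set_page_load_timeout(60)  # assumed limit in seconds; tune as needed

for row in df.itertuples():
    try:
        driver.get(row.productUrl)
    except TimeoutException:
        # The renderer did not respond in time: stop the load programmatically
        # (same effect as clicking "Stop loading this page") and move on.
        driver.execute_script("window.stop()")
        print(f"Timed out, skipping: {row.productUrl}")
        continue
    # ... existing scraping logic for the loaded page ...
```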
We also need resume functionality (e.g. start from photos dated 2020 and earlier).
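For the resume part, a sketch under the assumption that each row carries a date column (the `photoDate` name below is hypothetical): filter the rows before crawling so only 2020-and-earlier photos are visited.

```python
import pandas as pd

df = pd.read_csv("products.csv", parse_dates=["photoDate"])  # hypothetical column name
df = df[df["photoDate"].dt.year <= 2020]  # keep only photos from 2020 and earlier

for row in df.itertuples():
    ...  # existing crawl loop
```

If the data has no usable date, an alternative is to write the index of the last completed row to a small checkpoint file and skip past it on restart.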