## What's this PR do?
Fixes our Cuyahoga Soil and Water Conservation District spider (aka `cuya_soil_water_conservation`).

## Why are we doing this?
The spider broke due to changes on the pages it's targeting. The changes in this PR ensure the scraper runs without error.
## Steps to manually test
After installing the project using `pipenv`:

1. Activate the virtual environment:
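   A minimal example, assuming the standard `pipenv` workflow:

   ```sh
   pipenv shell
   ```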
2. Run the spider:
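   A sketch of the crawl command; the `-O test_output.csv` flag (Scrapy 2.4+) overwrites the output file referenced in the steps below:

   ```sh
   scrapy crawl cuya_soil_water_conservation -O test_output.csv
   ```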
3. Monitor stdout and ensure that the crawl proceeds without raising any errors. Pay attention to the final status report from Scrapy.
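   If the log is noisy, one way to surface the relevant lines (a sketch; Scrapy writes its log to stderr by default, and these keys appear in its standard end-of-crawl stats dump):

   ```sh
   scrapy crawl cuya_soil_water_conservation -O test_output.csv 2>&1 \
     | grep -E "log_count/ERROR|item_scraped_count|finish_reason"
   ```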
4. Inspect `test_output.csv` to ensure the data looks valid. I suggest opening a few of the URLs under the `source` column of `test_output.csv` and comparing each row's data with what you see on the page.

## Are there any smells or added technical debt to note?