What's this PR do?
Fixes our Cuyahoga County Investment Advisory Committee spider (a.k.a. cuya_investment_advisory_committee), which broke due to URL and page structure changes.
Why are we doing this?
We want working scrapers, of course 🤖 The changes in this PR include an updated start URL and parsing logic adjusted to the agency's new page structure.
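To give a feel for the kind of change involved, here is a simplified sketch, not the actual code in this PR: the class name, URL, and CSS selectors are placeholders, and the real spider in this repo builds on the project's shared spider base class rather than plain scrapy.Spider.

```python
# Illustrative sketch only -- placeholder names, URL, and selectors.
import scrapy


class CuyaInvestmentAdvisoryCommitteeSketch(scrapy.Spider):
    name = "cuya_investment_advisory_committee_sketch"

    # The agency moved its meetings page, so start_urls must point at the
    # new location (placeholder URL shown here).
    start_urls = ["https://example-cuyahoga-county.example.com/investment-advisory-committee"]

    def parse(self, response):
        # The page markup changed as well, so the row and field selectors
        # have to match the new structure (placeholder selectors shown here).
        for row in response.css("table.meetings tr"):
            yield {
                "title": "Investment Advisory Committee",
                "start": row.css("td.date::text").get(default="").strip(),
                "source": response.url,
            }
```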
Steps to manually test
After installing the project using pipenv (see Readme):
1. Activate the virtual environment.
2. Run the spider and write its output to test_output.csv (see the command sketch after this list).
3. Monitor stdout and ensure the crawl proceeds without raising any errors; pay attention to the final stats Scrapy reports when the crawl finishes.
4. Inspect test_output.csv to ensure the data looks valid. I suggest opening a few of the URLs in the source column of test_output.csv and comparing each row's data with what you see on the page.
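Assuming the project's usual pipenv + Scrapy workflow, steps 1 and 2 look roughly like this (the exact invocation is an assumption; check the Readme if it differs):

```bash
# 1. Activate the pipenv-managed virtual environment
pipenv shell

# 2. Run the spider, writing scraped items to test_output.csv
#    (-O overwrites an existing file; older Scrapy versions use -o to append)
scrapy crawl cuya_investment_advisory_committee -O test_output.csv
```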
Are there any smells or added technical debt to note?