-
I created a new folder for my latest web scraper, added an (as yet unfinished) spider file and spider test file, and also updated to the most recent version of the city-scrapers-pitt/master branch an…
-
## Background
We've transitioned from Python to R and R Markdown for data analysis; however, it looks like there's a new player on the block that supports Python, R, and Observable (JavaScript) out …
-
The URL for its meetings has changed.
The new URL is https://www.adamhscc.org/about-us/board-members/board-meetings
The scraper is at: https://github.com/City-Bureau/city-scrapers-cle/blob/main…
-
Documenters page: https://www.documenters.org/agencies/cuyahoga-county-arts-culture-193/
The scraper is currently at https://github.com/City-Bureau/city-scrapers-cle/blob/main/city_scrapers/spiders/cuy…
-
see also: https://github.com/City-Bureau/city-scrapers
-
### Problem
There are sites that publish local events, but we want those events centralized in one location for easy search, and pooled into a database alongside other events people publish.
### Solution
C…
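One way to sketch the "pool into a DB" idea is with the standard-library `sqlite3` module. The table name, columns, and `pool_events` helper below are hypothetical, not part of any existing scraper; the point is that a `UNIQUE` constraint on the source URL lets repeated scrapes of overlapping sites insert-or-ignore duplicates:

```python
import sqlite3

# Hypothetical schema: one central table of events pooled from many sites.
# UNIQUE(source_url) makes re-scraped duplicates a no-op on insert.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE events (
           title      TEXT NOT NULL,
           start      TEXT NOT NULL,      -- ISO 8601 datetime
           source_url TEXT NOT NULL UNIQUE
       )"""
)

def pool_events(rows):
    """Insert scraped events, silently skipping ones already pooled."""
    conn.executemany(
        "INSERT OR IGNORE INTO events (title, start, source_url) VALUES (?, ?, ?)",
        rows,
    )
    conn.commit()

# Two scrapers report overlapping events; the pool keeps one copy of each.
pool_events([
    ("Board Meeting", "2024-05-01T18:00", "https://example.org/a"),
    ("Public Hearing", "2024-05-02T10:00", "https://example.org/b"),
])
pool_events([
    ("Board Meeting", "2024-05-01T18:00", "https://example.org/a"),  # duplicate
])

count = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
print(count)  # 2 — the duplicate was ignored
```

In a real setup the conflict key would probably be something richer (site + event ID, or a content hash), but the insert-or-ignore pattern stays the same.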
-
Just opening this up to start the conversation. My initial thoughts are something like this:

```
doc/
    metrics-specs/
dashboard/
    frontend stuff, etc.
scrapers/
    some_script_to_ru…
```
-
> **Description:**
The location (city) of each job displayed on scrapers.peviitor.ro matches the job location on the company's website.
> **Preconditions**:
The [url](https://scrapers.peviitor.ro/sr…
-
This issue has been **automatically** created by PlanningAlerts for [Banyule City Council](https://www.planningalerts.org.au/authorities/banyule)
It uses the scraper [planningalerts-scrapers/banyule]…
-
This issue has been **automatically** created by PlanningAlerts. Only close this issue once the authority is working again on PlanningAlerts.