## The idea

This is a top-level file in the repo (Markdown or CSV is probably most readable) which we update automatically with GitHub Actions. It helps people answer the question, "what scrapers are in this repo, anyway?" without looking through all the folders.

Eventually we may outgrow this single-file directory or need fancier tools. For now, this should be fine.
## What's in the index

- A row for each scraper
  - We could populate this from Data Sources that have a `scraper_url`
- These properties:
  - `scraper_url`
  - `agency_described`
  - `jurisdiction` (state, county, municipality)
  - `record_type`
- We can link to a more detailed public Airtable / DB view for anyone who wants to do a more specific search.
- Consider: one group for "in this repo" and another group for "not in this repo"
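The generator a GitHub Action runs could be quite small. Below is a minimal sketch that renders the index as a Markdown table from data-source records, skipping sources without a `scraper_url`. The column names follow the properties listed above, but the record format, the `build_index` function, and the example data are assumptions for illustration, not the real Data Sources schema.

```python
def build_index(sources):
    """Render a Markdown table of scrapers from data-source records.

    Only sources with a non-empty scraper_url are included, since the
    index is meant to answer "what scrapers exist?".
    """
    lines = [
        "| scraper_url | agency_described | jurisdiction | record_type |",
        "| --- | --- | --- | --- |",
    ]
    for s in sources:
        if not s.get("scraper_url"):
            continue  # no scraper yet; leave it out (or put it in a second group)
        lines.append(
            "| {scraper_url} | {agency_described} | {jurisdiction} | {record_type} |".format(**s)
        )
    return "\n".join(lines)


# Hypothetical example records, not real data sources.
sources = [
    {"scraper_url": "scrapers/example_pd.py", "agency_described": "Example PD",
     "jurisdiction": "municipality", "record_type": "arrest records"},
    {"scraper_url": "", "agency_described": "No scraper yet",
     "jurisdiction": "state", "record_type": "court records"},
]

print(build_index(sources))
```

The same loop could split records into the two groups mentioned above ("in this repo" vs. "not in this repo") by checking whether `scraper_url` points into this repository.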