gtfs-hub, as deployed at https://gtfs.mfdz.de, provides great value to me when I quickly want to find out where a certain feed can be downloaded, when it has been updated, and if there are validation problems.
These benefits could be extended to automated processes if the list of feeds were available in a machine-readable format, e.g. CSV, JSON, or newline-delimited JSON.
On the other hand, third-party data infrastructure run by a "random" person on the internet (you) would become even more entrenched in the German transit data landscape.
I'd be open to a PR. My idea would be to generate a JSON-LD companion file for every dataset plus a sitemap, and to register the site with Google Dataset Search.
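For illustration, such a companion file could be a minimal schema.org `Dataset` record. A sketch in Python (the feed fields, names, and URL here are made-up assumptions, not gtfs-hub's actual config schema):

```python
import json

# Hypothetical feed metadata, roughly as a gtfs-hub config entry might look.
# Field names are assumptions for illustration only.
feed = {
    "name": "Example GTFS feed",
    "download_url": "https://example.org/example-gtfs.zip",
    "license": "https://creativecommons.org/licenses/by/4.0/",
}

# Minimal schema.org Dataset record, the format Google Dataset Search indexes.
jsonld = {
    "@context": "https://schema.org/",
    "@type": "Dataset",
    "name": feed["name"],
    "license": feed["license"],
    "distribution": [{
        "@type": "DataDownload",
        "encodingFormat": "application/zip",
        "contentUrl": feed["download_url"],
    }],
}

print(json.dumps(jsonld, indent=2))
```

A sitemap listing the per-feed pages would then let crawlers discover these records.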
Or do you have a different suggestion?