[ ] Make crawl launches 'back-fillable' so we can re-run launches if they don't happen:
Needs date-stamped crawl feed files.
Needs a separate task that is dependent on the data export, or the current w3act_export needs to be made back-fillable.
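A minimal sketch of what a date-stamped feed file could look like, so that a back-filled run for a past logical date deterministically re-creates the same file (the path layout, `crawl_feed_path` helper, and `/shared/crawl-feeds` base directory are all assumptions, not the current layout):

```python
from datetime import date

# Hypothetical helper: derive a date-stamped crawl feed path from the
# run's logical date, so re-running (back-filling) a launch for a past
# date reads/writes exactly the same file as the original run would.
def crawl_feed_path(frequency: str, run_date: date,
                    base: str = "/shared/crawl-feeds") -> str:
    # e.g. /shared/crawl-feeds/daily/2024-01-15/crawl-feed.jsonl
    return f"{base}/{frequency}/{run_date.isoformat()}/crawl-feed.jsonl"

print(crawl_feed_path("daily", date(2024, 1, 15)))
```

In an Airflow task this date would come from the task's logical/execution date rather than `date.today()`, which is what makes the launch task back-fillable.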
[ ] Blocks, seeds and scope files in use by the crawlers need to be updated:
Blocks and scope are managed via Watched Files; it is less clear if/how seeds should be blanket-updated.
It's also not clear how best to deliver the updates. Probably push rather than pull, as that keeps Airflow in charge of things. But then: a shared volume updated directly by Airflow, or files published somewhere and a remote task or service prompted to pull them down?
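If the push-to-shared-volume option is taken, one detail worth getting right is atomicity: the crawler's Watched Files checker must never read a half-written blocks/scope file. A sketch of that, assuming a POSIX shared volume (the `push_watched_file` helper name is made up for illustration):

```python
import os
import tempfile

def push_watched_file(content: bytes, dest_path: str) -> None:
    """Write a blocks/scope file atomically on a shared volume.

    Writes to a temp file in the same directory, then renames into
    place, so a Watched Files poller sees either the old file or the
    new one -- never a partial write.
    """
    dest_dir = os.path.dirname(dest_path)
    os.makedirs(dest_dir, exist_ok=True)
    fd, tmp = tempfile.mkstemp(dir=dest_dir)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(content)
        os.replace(tmp, dest_path)  # atomic rename on POSIX
    except Exception:
        os.unlink(tmp)
        raise
```

The same-directory temp file matters: `os.replace` is only atomic within one filesystem.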
[ ] Launch metrics need to be posted to Prometheus.
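Since launch tasks are short-lived batch jobs, the usual route is the Prometheus Pushgateway via `prometheus_client`. A sketch, assuming a Pushgateway is reachable (the metric names, `job` label, and gateway address are placeholders, not agreed conventions):

```python
from prometheus_client import CollectorRegistry, Gauge, push_to_gateway

def build_launch_registry(seed_count: int, launch_ts: float) -> CollectorRegistry:
    # Collect per-launch metrics into a fresh registry so each push
    # replaces the previous values for this job on the gateway.
    registry = CollectorRegistry()
    seeds = Gauge("crawl_launch_seed_count",
                  "Number of seeds in the latest crawl launch",
                  registry=registry)
    seeds.set(seed_count)
    ts = Gauge("crawl_launch_last_success_timestamp",
               "Unix time of the last successful crawl launch",
               registry=registry)
    ts.set(launch_ts)
    return registry

def post_launch_metrics(registry: CollectorRegistry,
                        gateway: str = "pushgateway:9091") -> None:
    # Push to the gateway; Prometheus then scrapes the gateway as usual.
    push_to_gateway(gateway, job="crawl_launch", registry=registry)
```

A timestamp gauge like this also gives a cheap alert rule for "launch didn't happen" (alert when `time() - crawl_launch_last_success_timestamp` exceeds the expected interval).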
Building on #83, improve crawl management: