I've started working on this. So far I've implemented functionality to identify and delete data files that are identical to an earlier pull.
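For reference, a minimal sketch of the duplicate-detection idea; the helper names and the assumption that file names sort chronologically are mine, not necessarily how the actual implementation works:

```python
import hashlib
from pathlib import Path


def file_sha256(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash a file in chunks so large pulls don't have to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def delete_duplicate_pulls(data_dir: Path) -> None:
    """Delete data files whose contents are identical to an earlier pull.

    Assumes file names sort chronologically (e.g. a timestamp prefix),
    so the earliest copy of each distinct payload is the one kept.
    """
    seen_hashes: set[str] = set()
    for data_file in sorted(data_dir.iterdir()):
        if not data_file.is_file():
            continue
        file_hash = file_sha256(data_file)
        if file_hash in seen_hashes:
            data_file.unlink()
        else:
            seen_hashes.add(file_hash)
```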
I've structured things so that some functionality can be reused in the next step, where I'll implement a DAG to delete some non-duplicated data, although I haven't settled on retention logic yet. Maybe it will be a `keep_last_n_data_versions` policy, or maybe it would be better to have a `keep_data_versions_from_past_n_days` policy, or maybe both. I'll think through the cases.
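Both candidate policies are easy to sketch. Assuming versioned data files whose names sort chronologically, something like this (function and parameter names are just the hypothetical policy names from above):

```python
from datetime import datetime, timedelta
from pathlib import Path


def keep_last_n_data_versions(version_files: list[Path], n: int) -> list[Path]:
    """Return the files to delete, keeping only the n most recent versions.

    Assumes file names sort chronologically (e.g. a timestamp prefix).
    """
    ordered = sorted(version_files)
    return ordered[:-n] if n > 0 else ordered


def keep_data_versions_from_past_n_days(
    version_files: list[Path], n_days: int
) -> list[Path]:
    """Return the files to delete, keeping versions modified in the past n_days."""
    cutoff = datetime.now() - timedelta(days=n_days)
    return [
        f for f in version_files
        if datetime.fromtimestamp(f.stat().st_mtime) < cutoff
    ]
```

If both policies end up being useful, combining them is just deleting the intersection of the two delete lists, so a file survives if either policy would keep it.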
After reviewing the sizes of XComs stored in the `airflow_metadata_db` and of logs in all non-scheduler log directories, I see that the contents of the `/logs/scheduler` dir comprise 94% of the `/logs` disk usage. Upon inspecting a few scheduler log files, I see that the issue is that there are ~25 MB of logs per DAG per day, overwhelmingly driven by an unnecessary warning that's slated to be removed in Airflow v2.5.2 (we're at v2.5.1 right now). So I'll settle for just clearing out old scheduler logs for now.
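A minimal sketch of that pruning logic, assuming the standard Airflow layout of one dated `YYYY-MM-DD` subdirectory per day under the scheduler log dir (the mount point and retention window below are illustrative assumptions, not settled values):

```python
import shutil
from datetime import datetime, timedelta
from pathlib import Path

SCHEDULER_LOG_DIR = Path("/opt/airflow/logs/scheduler")  # assumed mount point
RETENTION_DAYS = 14  # arbitrary choice for illustration


def delete_old_scheduler_logs() -> None:
    """Delete dated scheduler-log subdirectories older than the retention window.

    The scheduler writes one YYYY-MM-DD directory per day (plus a `latest`
    symlink), so pruning whole directories by name is safe.
    """
    cutoff = datetime.now() - timedelta(days=RETENTION_DAYS)
    for day_dir in SCHEDULER_LOG_DIR.iterdir():
        if day_dir.is_symlink() or not day_dir.is_dir():
            continue  # skip the `latest` symlink and stray files
        try:
            dir_date = datetime.strptime(day_dir.name, "%Y-%m-%d")
        except ValueError:
            continue  # not a dated log directory
        if dir_date < cutoff:
            shutil.rmtree(day_dir)
```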
If the ingestion is reliable enough, it might even be feasible to add a task to each DAG that cleans up the downloaded file after ingestion.
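That could be as simple as one more task at the tail of each ingestion DAG; a hypothetical TaskFlow sketch (the task name and `file_path` wiring are assumptions):

```python
import os

from airflow.decorators import task


@task
def delete_ingested_file(file_path: str) -> None:
    """Remove the downloaded file once it has been ingested successfully.

    Wired downstream of the ingestion task, Airflow's default all_success
    trigger rule means the file is left in place if ingestion fails.
    """
    if os.path.exists(file_path):
        os.remove(file_path)
```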