kraemer-lab / DART-Pipeline

Data analysis pipeline for the Dengue Advanced Readiness Tools (DART) project
https://dart-pipeline.readthedocs.io
MIT License

44 Adapt existing relative wealth index (RWI) code into main pipeline #89

Closed: rowannicholls closed this 4 months ago

rowannicholls commented 4 months ago

Functions relevant to this ticket (a sketch of the population-weighted step is shown after the tree):

DART-Pipeline
 ├ A Collate Data
 │  ├ collate_data.py
 │  │  ├ ✅ download_economic_data()
 │  │  │  └ ✅ download_relative_wealth_index_data()
 │  │  └ ✅ download_socio_demographic_data()
 │  │     └ ✅ download_meta_pop_density_data()
 │  └ test_collate_data.py
 │     ├ ✅ test_download_economic_data()
 │     ├ ✅ test_download_relative_wealth_index_data()
 │     ├ ✅ test_download_socio_demographic_data()
 │     └ ✅ test_download_meta_pop_density_data()
 └ B Process Data
    ├ process_data.py
    │  ├ ✅ process_economic_data()
    │  │  └ ✅ process_relative_wealth_index_data()
    │  ├ ✅ process_socio_demographic_data()
    │  │  └ ✅ process_meta_pop_density_data()
    │  └ ✅ process_economic_geospatial_sociodemographic_data()
    │     └ ✅ process_pop_weighted_relative_wealth_index_data()
    └ test_process_data.py
       ├ ✅ test_process_economic_data()
       ├ ✅ test_process_relative_wealth_index_data()
       ├ ✅ test_process_socio_demographic_data()
       ├ ✅ test_process_meta_pop_density_data()
       ├ ✅ test_process_economic_geospatial_sociodemographic_data()
       └ ✅ test_process_pop_weighted_relative_wealth_index_data()
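For context, the final step in the tree above, `process_pop_weighted_relative_wealth_index_data()`, combines the RWI and population density outputs. Below is a minimal sketch of one way such a population-weighted aggregation could look, assuming the RWI and population tables share grid-cell coordinates and an admin-region column has already been assigned by a spatial join; the column names are illustrative, not the pipeline's actual schema.

```python
# Hypothetical sketch: population-weighted RWI per admin region.
# Assumes `rwi` and `pop` are pandas DataFrames keyed on the same
# grid-cell coordinates, and that `rwi` already carries an
# "admin_region" column from an earlier spatial join (not shown).
import pandas as pd


def pop_weighted_rwi(rwi: pd.DataFrame, pop: pd.DataFrame) -> pd.Series:
    """Return the population-weighted mean RWI for each admin region."""
    merged = rwi.merge(pop, on=["latitude", "longitude"], how="inner")
    # Weight each cell's RWI by its population, then normalise per region
    merged["weighted"] = merged["rwi"] * merged["population"]
    grouped = merged.groupby("admin_region")
    return grouped["weighted"].sum() / grouped["population"].sum()
```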
jsbrittain commented 4 months ago

@rowannicholls a quick note to say that the tests appear to be failing because the runner has run out of disk space (https://github.com/kraemer-lab/DART-Pipeline/actions/runs/9977049468?pr=89). According to the specs, standard runners have 14 GB of storage available. You may need to tidy up the files, or test collation and aggregation for one dataset at a time.
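One way to keep each test inside that budget, sketched below, is to give every download test its own temporary directory and remove it afterwards; a quick free-space check can also be used to skip or abort early. The fixture name and helper here are illustrative, not part of the project's actual test suite.

```python
# Hypothetical sketch for keeping CI runs within the ~14 GB disk budget.
import shutil
from pathlib import Path

import pytest


@pytest.fixture
def downloads_dir(tmp_path: Path):
    """Give each test its own download directory and remove it afterwards."""
    target = tmp_path / "downloads"
    target.mkdir()
    yield target
    shutil.rmtree(target, ignore_errors=True)


def free_disk_gb(path: str = "/") -> float:
    """Report free disk space in GB, useful for an early skip or abort."""
    return shutil.disk_usage(path).free / 1e9
```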

rowannicholls commented 4 months ago

@jsbrittain thanks for catching that, I'll take a look

rowannicholls commented 4 months ago

@jsbrittain the culprit seems to have been a population density map from Meta's Data for Good, which was 13.3 GB. Have fixed.
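For reference, a hedged sketch of one way to guard against this class of problem: check the remote file's size from its Content-Length header before downloading and skip anything over a set budget. The function names, URL handling, and threshold below are placeholders, not the pipeline's actual implementation.

```python
# Hypothetical sketch: refuse to download files larger than a size budget,
# e.g. a multi-GB population density raster, onto a CI runner.
import requests


def remote_size_gb(url: str) -> float:
    """Return the remote file size in GB from the Content-Length header."""
    response = requests.head(url, allow_redirects=True, timeout=30)
    response.raise_for_status()
    return int(response.headers.get("Content-Length", 0)) / 1e9


def download_if_small_enough(url: str, dest: str, max_gb: float = 5.0) -> bool:
    """Download only if the file fits within the size budget."""
    if remote_size_gb(url) > max_gb:
        return False
    with requests.get(url, stream=True, timeout=30) as r:
        r.raise_for_status()
        with open(dest, "wb") as fh:
            for chunk in r.iter_content(chunk_size=1 << 20):
                fh.write(chunk)
    return True
```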