Add the `product_level` key to the `product.yaml` file --> remove the if/else logic across the code and in the unit tests
Add the available versions to `product.yaml`
Add the product `start_time` and `end_time` to `product.yaml`
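The three `product.yaml` items above could look roughly like this; the entry name, key names, and values below are placeholders, not the actual gpm_api schema:

```yaml
# Hypothetical sketch of one entry in product.yaml.
# Key names, nesting, and values are assumptions to be decided.
2A-DPR:
  product_level: 2A
  available_versions: [5, 6, 7]
  start_time: "2014-03-08 00:00:00"
  end_time: null   # null could mean "product still being produced"
```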
In `conftest`, create a real example for each product (product info, src and dst paths saved into a YAML file)
--> Use `find_pps_filepaths` and `get_filepath_info` to generate the YAML file
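A minimal sketch of what that generation step could look like; `find_pps_filepaths` and `get_filepath_info` are stubbed here with assumed signatures, and `json` is used only to keep the sketch dependency-free (the real task targets a YAML file, e.g. via `yaml.safe_dump`):

```python
import json
from pathlib import Path

# Stand-ins for gpm_api's find_pps_filepaths / get_filepath_info.
# Their signatures and return shapes are assumptions for this sketch.
def find_pps_filepaths(product):
    return [f"pps/{product}/2020/001/{product}.20200101.HDF5"]

def get_filepath_info(filepath):
    return {"src": filepath, "dst": f"local/{Path(filepath).name}"}

def generate_product_examples(products, out_path):
    """Collect one src/dst example per product and save it to disk.

    The real conftest would dump YAML (yaml.safe_dump); json is used
    here only to avoid a third-party dependency in the sketch.
    """
    examples = {}
    for product in products:
        filepath = find_pps_filepaths(product)[0]
        examples[product] = get_filepath_info(filepath)
    Path(out_path).write_text(json.dumps(examples, indent=2))
    return examples
```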
In `test_checks.py`, use a real local directory file structure
Allow `pd.Timestamp` in `check_time`
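A sketch of how `check_time` could accept `pd.Timestamp` without importing pandas, by duck-typing on `.to_pydatetime()`; the real gpm_api signature and accepted types are assumptions here. Note that `pd.Timestamp` subclasses `datetime`, so the duck-typed branch must come first if the Timestamp should be converted rather than returned as-is:

```python
from datetime import datetime

def check_time(time):
    """Normalize the time argument to datetime.datetime (hypothetical sketch).

    Accepts datetime, ISO-format strings, and objects exposing
    .to_pydatetime() (e.g. pandas.Timestamp), without importing pandas.
    """
    if hasattr(time, "to_pydatetime"):
        # pd.Timestamp subclasses datetime, so check this first
        return time.to_pydatetime()
    if isinstance(time, datetime):
        return time
    if isinstance(time, str):
        return datetime.fromisoformat(time)
    raise TypeError(f"Unsupported time type: {type(time)}")
```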
`test_directories`/`test_get_pps_directory`: store the expected path in YAML (to track changes over time)
I think we could create a GPM `base_dir` in the test directory (or create a temporary directory when executing the test) with the expected structure `tests/data/GPM///////`, then add some dummy files (with realistic names) and test that `find_filepaths` returns their list.
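The dummy-files idea could be sketched as follows; `find_filepaths` here is a stand-in for the real gpm_api function (assumed to search recursively for HDF5 files), and the directory layout and file names are invented:

```python
import tempfile
from pathlib import Path

def create_dummy_gpm_tree(base_dir, filenames):
    """Create an empty dummy file for each relative path under base_dir."""
    for rel in filenames:
        path = Path(base_dir) / rel
        path.parent.mkdir(parents=True, exist_ok=True)
        path.touch()

def find_filepaths(base_dir, pattern="*.HDF5"):
    """Stand-in for gpm_api's find_filepaths (assumed to glob recursively)."""
    return sorted(str(p) for p in Path(base_dir).rglob(pattern))
```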
Alternatively (to be discussed with @sphamba), we could create such a directory in `test/data/raw/GPM`, write the test HDF files there to test the data reading, and then have a `test/data/expected/GPM` directory with the expected gpm_api `xr.Dataset` (saved in netCDF or Zarr format).
`test_data_integrity` with a corrupted file
Remove `isempty` usage (i.e. True --> False; np.array(False), [()], [[]], [None] --> True; np.bool --> False)
Test `find_filepaths` in `test_disk.py`.
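For the `isempty` removal above, one option is to replace it with an explicit check whose semantics are predictable; a minimal sketch (the name `is_non_empty` and the exact rules are assumptions, chosen so that `[()]`, `[[]]`, and `[None]` count as non-empty, unlike the quirky helper being removed):

```python
def is_non_empty(x):
    """Explicit emptiness check (hypothetical replacement for isempty)."""
    if x is None:
        return False
    if hasattr(x, "size"):              # numpy arrays: empty only when size == 0
        return x.size > 0
    if isinstance(x, (list, tuple, dict, set, str)):
        return len(x) > 0               # [()], [[]], [None] are all non-empty
    return True                         # scalars (True, 0, np.bool_) are not "empty"
```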