Closed by kkappler 2 years ago
Merging #226 (18755f9) into main (3a45e3a) will decrease coverage by 0.27%. The diff coverage is 80.48%.
```
@@            Coverage Diff             @@
##             main     #226      +/-   ##
==========================================
- Coverage   78.03%   77.75%   -0.28%
==========================================
  Files         101      101
  Lines        5432     5454      +22
==========================================
+ Hits         4239     4241       +2
- Misses       1193     1213      +20
```
| Impacted Files | Coverage Δ | |
|---|---|---|
| aurora/config/metadata/decimation.py | 100.00% <ø> (+8.33%) | :arrow_up: |
| aurora/pipelines/time_series_helpers.py | 73.88% <48.00%> (-13.01%) | :arrow_down: |
| tests/time_series/test_windowing_scheme.py | 90.17% <75.00%> (-1.17%) | :arrow_down: |
| aurora/pipelines/process_mth5.py | 97.60% <91.66%> (-0.36%) | :arrow_down: |
| aurora/pipelines/run_summary.py | 88.88% <100.00%> (+0.31%) | :arrow_up: |
| aurora/transfer_function/kernel_dataset.py | 80.00% <100.00%> (+3.12%) | |
| tests/parkfield/test_process_parkfield_run_rr.py | 94.44% <100.00%> (ø) | |
| tests/synthetic/test_stft_methods_agree.py | 95.23% <100.00%> (+0.11%) | :arrow_up: |
The minor decrease in code coverage is due to new resample methods that have been developed and bench-tested but are not yet covered by the testing framework.
Due to the approaching IRIS MT Short Course, this PR (aside from any bugs that need fixing) will become the workshop branch and will be released, or at least tagged, next Friday, 14 October.
- [x] Test on v3.9
- [x] Review the populating of `dataset_df`
- [ ] Make a version of decimation that doesn't drop out of and back into xarray. NO: see issue #227; xarray does not seem to support this properly yet.
  However, I did create two pure-xarray implementations of the decimation, one using `coarsen()` and one using `resample()`, if for no other reason than to show the syntax. Running these on the synthetic data produced reasonable results, but each decimated sample is essentially the mean of `decimation_factor` samples around the time point. This is a very weak form of anti-alias filtering, and I don't trust it yet. The `resample()` command is in general very slow, whereas `coarsen()` seems fast; they both do basically the same thing. These functions are called `prototype_decimate_2` and `prototype_decimate_3`, and are in `time_series_helpers.py`.
- [x] Move `sample_rate` updating into the `run_ts` xarray object's metadata dictionary (rather than modifying the `run_object` in place)