aewallin opened this issue 5 years ago
Anders, I have some Matlab/Scilab code for several things discussed here:
I would welcome a conda package. Actually, we already build it from the PyPI package for our PTB-internal conda repo.
Thanks for the comments.
If you can, please test the conda package and report any issues: https://anaconda.org/conda-forge/allantools
Tested installation of the conda package; it works. No issues found so far, thanks!
Hi @aewallin !
Over the past couple of weeks I have been working on an updated version of allantools, with some of the features that were missing so far. It's still under development, but it should generally work as expected. I will keep working on it in the upcoming weeks and add new features, but I wanted to know what you thought of the concept first :)
I don't really use GitHub anymore, so it's unofficially forked on GitLab: https://gitlab.com/amv213/allantoolkit. When it reaches a mature enough stage of development, I would be more than happy to merge it here.
As far as I have tested, the code exactly replicates Stable32 (v1.61) results. The only difference is in the confidence intervals: for some reason the chi-squared confidence intervals calculated by Stable32 are slightly different from what I get with SciPy's chi-squared function, even though the equivalent degrees of freedom I get are the same.
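For context, the chi-squared confidence interval on a deviation is conventionally built from the estimated variance and the equivalent degrees of freedom (edf). A minimal sketch of that standard construction, using SciPy's exact inverse CDF (the helper name `chi2_confidence_interval` is hypothetical, not the allantools API):

```python
import numpy as np
from scipy.stats import chi2

def chi2_confidence_interval(dev, edf, ci=0.683):
    """Two-sided confidence interval for a deviation estimate `dev`,
    given equivalent degrees of freedom `edf`.

    Uses the standard chi-squared construction: the interval for the
    variance is [edf*var/chi2_hi, edf*var/chi2_lo], then take sqrt.
    Hypothetical helper for illustration only.
    """
    alpha = 1.0 - ci
    chi2_hi = chi2.ppf(1.0 - alpha / 2.0, edf)  # upper chi-squared quantile
    chi2_lo = chi2.ppf(alpha / 2.0, edf)        # lower chi-squared quantile
    var = dev ** 2
    var_lo = edf * var / chi2_hi
    var_hi = edf * var / chi2_lo
    return np.sqrt(var_lo), np.sqrt(var_hi)
```

If the edf values already match Stable32 exactly, any remaining discrepancy has to come from how each implementation evaluates the inverse chi-squared CDF itself.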
Don't hesitate to let me know what you think!
Changelog:
Next steps (?):
Merry Christmas to all!
Wow, that seems like a lot of work - good job!
I would encourage you to submit pull-requests in smaller pieces that allow myself or others to review code/tests/results in much more manageable pieces.
I don't see a lot of discussion on the low level API in the issues or the mailing list. I suggest keeping the low-level API and perhaps adding a higher level one, if needed.
How large are the differences in confidence intervals relative to Stable32? If the differences appear in the 2nd digit or later, this could be because Stable32 uses an approximating function for the (inverse) cumulative chi-squared distribution. I can try to dig out the approximation used in Stable32, if you are interested in exploring this further.
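To illustrate how an approximation could shift the intervals in the 2nd or 3rd digit: one classic closed-form approximation to the inverse chi-squared CDF is the Wilson-Hilferty transform. Whether Stable32 uses this particular formula is an assumption to be verified, but it shows the size of error such approximations introduce:

```python
import numpy as np
from scipy.stats import chi2, norm

def chi2_ppf_wilson_hilferty(p, nu):
    """Wilson-Hilferty approximation to the inverse chi-squared CDF.

    A classic closed-form approximation; whether Stable32 uses this
    exact formula is an assumption, shown here for illustration.
    """
    z = norm.ppf(p)           # standard normal quantile
    c = 2.0 / (9.0 * nu)
    return nu * (1.0 - c + z * np.sqrt(c)) ** 3

# Compare against SciPy's exact inverse CDF at a typical quantile:
exact = chi2.ppf(0.95, 10)
approx = chi2_ppf_wilson_hilferty(0.95, 10)
```

The approximation agrees with the exact value only to a few digits, which is the same order as the discrepancies reported above.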
Perfect, thanks!
Yes, the confidence-interval differences I'm seeing are generally at the 3rd decimal digit for most deviation types. If you have time to find the approximation that Stable32 uses, that would be amazing! I'm then more than happy to try to see whether we can match results completely.
The low-level API change was just to handle the increased number of dev outputs: for every stability run I am now also returning averaging factors, the identified noise type, and lower and upper confidence intervals. Now you can just do out = adev(...) and then get the individual fields with out.taus, out.devs, out.alphas, out.devs_lo, ..., instead of having to unpack the tuple from the start and remember in which order everything comes out.
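Such a result object could look something like the following minimal sketch. The field names mirror the ones mentioned above; the actual container in allantoolkit may differ (this is an assumption, not its real API):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class DevResult:
    """Bundles the outputs of one stability run into named fields.

    Hypothetical sketch: field names follow the discussion above,
    not necessarily the actual allantoolkit implementation.
    """
    taus: np.ndarray      # averaging times
    devs: np.ndarray      # deviation estimate at each tau
    alphas: np.ndarray    # identified power-law noise type per tau
    devs_lo: np.ndarray   # lower confidence bound per tau
    devs_hi: np.ndarray   # upper confidence bound per tau
```

A function like adev(...) would then return a single DevResult, so callers write `out.devs` or `out.devs_lo` instead of unpacking a positional tuple whose ordering they must remember.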
Of course, this is all quite flexible. Once the code is all ready I will certainly get in touch again to get help on how to best go about integrating changes, and to see which changes you would like and which are not necessary :)
@amv213 : my current understanding of how Stable32 approximates the inverse chi-squared cumulative distribution, and how this leads to differences in computed confidence intervals compared to allantools, is now posted on my blog: http://www.anderswallin.net/2020/12/fun-with-chi-squared/ There is a link to some preliminary code at the end.
It probably makes sense to split the confidence-interval discussion into a separate issue, if others feel that comparing confidence intervals with Stable32 is useful now and in the future for the allantools codebase.
The Groslambert covariance is now (June 2021) included. Documentation and tests are still to be added.
This workflow should now run test coverage analysis and upload the results to Coveralls: https://github.com/aewallin/allantools/actions/workflows/coverage.yml
This is a list of to-do features/issues that will not make it into 2019.07
Algorithms:
Test and Build:
Dependencies: