Closed timothy-glover closed 8 months ago
On the slow tests, we currently have something similar with tests for remote data, where they are skipped by default.
How about using something like
pytest-skip-slow
to mark the slow tests? I think the slow tests should be enabled in CI, but it makes sense that you may want to skip them by default locally when not working on anything related.
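For reference, the same effect can be achieved without an extra dependency using the custom-marker pattern from the pytest documentation; `pytest-skip-slow` packages essentially this behaviour behind a plugin. A sketch (the `--runslow` flag name follows the pytest docs example and is an arbitrary choice):

```python
# conftest.py -- sketch of the slow-test marker pattern from the pytest docs.
import pytest


def pytest_addoption(parser):
    # Opt-in flag: slow tests only run when this is passed (e.g. in CI).
    parser.addoption(
        "--runslow", action="store_true", default=False,
        help="run tests marked as slow",
    )


def pytest_configure(config):
    # Register the marker so `pytest --strict-markers` does not complain.
    config.addinivalue_line("markers", "slow: mark test as slow to run")


def pytest_collection_modifyitems(config, items):
    if config.getoption("--runslow"):
        return  # --runslow given: run everything, including slow tests
    skip_slow = pytest.mark.skip(reason="need --runslow option to run")
    for item in items:
        if "slow" in item.keywords:
            item.add_marker(skip_slow)
```

A slow test is then decorated with `@pytest.mark.slow`; local runs skip it by default, while CI would invoke `pytest --runslow`.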
That solution seems ideal. The package is not installed by default, however, so is there a procedure for incorporating it as a required package?
This PR introduces the Kullback-Leibler Divergence (KLD) measure and reward function for calculating the relative entropy between two distributions. This implementation uses the discrete formulation of the KLD and is currently only compatible with particle state types. The mathematical form of the equation is provided in the measure, `KLDivergence`. The reward function, `ExpectedKLDivergence`, uses the KLD measure to decide which candidate sensing (or movement) action leads to the most informative measurement, based on the provided predictor and updater. A second KLD-based reward function, `MultiUpdateExpectedKLDivergence`, generates expected detections by resampling from the predicted particle state to create a subsample of possible measurements based on the target distribution; the KLD is then calculated for each of these measurements. This comes at a computational cost. I am open to name suggestions if the general consensus is that these class names are confusing.

Tests for each new class have been introduced or incorporated into existing test frameworks. `MultiUpdateExpectedKLDivergence` results in a significant increase in test time for the sensor management module: the sensor management tests take around 8 minutes 30 seconds on my fairly standard PC, bringing the total testing time to 9 minutes 24 seconds.
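For readers unfamiliar with the discrete formulation used here, a minimal NumPy sketch of the KLD between two weighted particle sets over the same support (the function name and interface are illustrative only; the actual `KLDivergence` measure in this PR operates on particle state objects):

```python
import numpy as np


def discrete_kl_divergence(p, q, eps=1e-12):
    """Discrete KL divergence D_KL(P || Q) = sum_i p_i * log(p_i / q_i).

    p, q: particle weights over the same support, normalised here so
    they form valid probability distributions. eps guards against
    log(0) and division by zero for zero-weight particles.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))


# Identical distributions give (near) zero divergence...
uniform = np.ones(4) / 4
print(discrete_kl_divergence(uniform, uniform))

# ...while a peaked posterior against a flat prior gives a positive
# value: the kind of quantity a KLD-based reward function can use to
# rank candidate sensing actions by how informative they are.
peaked = np.array([0.85, 0.05, 0.05, 0.05])
print(discrete_kl_divergence(peaked, uniform))
```

The expected-reward functions described above would evaluate something like this between the predicted (prior) and updated (posterior) particle distributions for each candidate action.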