Closed · maho3 closed this 9 months ago
Attention: 10 lines in your changes are missing coverage. Please review.
Comparison is base (`a11f4bb`) 97.30% compared to head (`3dbc6f6`) 97.27%.
| Files | Patch % | Lines |
|---|---|---|
| ili/dataloaders/loaders.py | 79.31% | 6 Missing :warning: |
| ili/inference/runner_sbi.py | 96.00% | 2 Missing :warning: |
| ili/inference/runner_pydelfi.py | 97.29% | 1 Missing :warning: |
| ili/utils/ndes_pt.py | 95.45% | 1 Missing :warning: |
Currently running tests on a duplicate of this branch (`seq_test`), in order to use a more recent torch than the one in setup.cfg.
@CompiledAtBirth how did the tests do on the duplicate branch?
From @CompiledAtBirth on slack:
For the SBISimulator loader specifically, the simulator function has to output a `torch.Tensor` of `torch.Size((1, n))`. The case is easy to handle once the user knows their simulator function must return that shape.
To avoid this mistake, could you please make sure it is well documented in the code and in the tutorial? We might as well do it in this PR, as it is a minor thing to add.
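To illustrate the shape requirement quoted above, here is a minimal toy simulator. The function body is invented for illustration; only the `(1, n)` output-shape constraint comes from the discussion:

```python
import torch

def toy_simulator(theta: torch.Tensor) -> torch.Tensor:
    """Toy simulator returning one data vector shaped (1, n), as SBISimulator expects."""
    n = 10
    x = theta.sum() + torch.randn(n)  # invented toy "physics"
    return x.reshape(1, n)            # torch.Size((1, n)) -- the required shape

sample = toy_simulator(torch.zeros(3))
assert sample.shape == torch.Size((1, 10))
```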
Added comments in the code and tutorial.
The numpy seed is updated in PlotSinglePosterior if `x_obs` is None and a seed is provided.
This is rather difficult to test, because we don't return which `x_obs` we choose, and the numpy seed that was used is difficult to access. I think this test isn't necessary, given the code is pretty clear as to what it's doing.
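For context, a hypothetical sketch of the pattern being described: if no observation is supplied but a seed is, the global numpy RNG is seeded before drawing one. The function and argument names here are illustrative, not the actual PlotSinglePosterior API:

```python
import numpy as np

def pick_x_obs(x_test, x_obs=None, seed=None):
    """Sketch: if no observation is given, seed numpy's global RNG and draw one."""
    if x_obs is None:
        if seed is not None:
            np.random.seed(seed)  # the global-seed update mentioned above
        # the chosen index is not returned/exposed, which is what makes this hard to test
        x_obs = x_test[np.random.randint(len(x_test))]
    return x_obs

data = np.arange(5)
# With the same seed, the same observation is chosen each time.
assert pick_x_obs(data, seed=0) == pick_x_obs(data, seed=0)
```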
Check `ValidationRunner.load_posterior_sbi()` works even if the posterior does not have a name attribute.
I just removed this functionality from `load_posterior_sbi`, because there should never be a case wherein an sbi posterior doesn't have a name; it is given `""` by default.
This PR implements numerous structural changes to the configurations of the Data/Inference/Validation stages, with few changes to how they work on the backend. The general theme of this PR is to add various quality-of-life improvements that make it easier to understand the configurations and use ltu-ili for distributed testing. Here's a summary of what has changed.
Removed unnecessary configurations:

- `backend` in the ValidationRunner. This is now gathered automatically from the imports.
- `n_walkers` in DelfiRunner (inference stage). It didn't make sense to specify `n_walkers` before the sampling stage, and it is redefined anyway in the EmceeSampler during validation.
- `n_data` and `n_params` in the DelfiRunner configuration. This info is now gathered automatically from the provided dataloader.
- `n_data` configuration of the FCN network. It was simply the dimensionality of the output of the network. It can now be configured as the last entry in `n_hidden`.
- The `inference_class` argument of the inference Runner objects, replaced with `engine`. Previously, users had to load an engine object from file before passing it into the Inference runners (e.g. `inference_class = sbi.inference.SNPE`). Now users only need to specify a string (e.g. `engine='NPE'`, or NLE/NRE/SNPE etc.), and we figure out the appropriate `inference_class` depending on the backend.
- Paths (`out_dir`, `in_dir`, etc.) can be specified with `str` inputs instead of `pathlib.Path`. We convert them all to `Path`s on the backend.

Renamed things to make them more obvious:
- `output_path` -> `out_dir` and `posterior_path` -> `posterior_file`, to match the naming conventions in loaders.py. Now, the ValidationRunner will only load the posterior from `out_dir / posterior_file` and subsequently save metrics in `out_dir`. I didn't see a reason why one would want to save metrics in a different directory from their posterior.
- `output_path` -> `out_dir` in the Inference stages.

Note: the old imports (e.g. `from ili.inference.runner_sbi import SBIRunner`) still work for backwards compatibility.

Conformed the multiple Inference backends to a universal configuration:
- Created `InferenceRunner` in ili/inference/runner.py to act as a universal engine for the Inference stage. Its configuration takes `backend` and `engine` as parameters, and then uses these to find the right object for your inference. For example, `backend='sbi'` and `engine='NLE'` will load the configuration in `SBIRunner`, whereas `backend='sbi'` and `engine='SNLE'` will load it in `SBIRunnerSequential`. This new configuration should be preferred, but it does not affect the way we previously specified SBIRunner/SBIRunnerSequential/DelfiRunner etc., so it should be backwards compatible. Adds appropriate tests.
- Added `load_nde_sbi` and `load_nde_pydelfi`, which provide utilities for loading neural architectures from given configurations for each backend. This unifies the configuration with which we specify neural architectures between sbi and pydelfi, making things considerably simpler. These functions were used to refactor `SBIRunner` and `DelfiWrapper`. Adds appropriate tests.
- Extended the `from_config` function of the Data/Inference/Validation stages. This can be very useful if you want to iterate (in Python) over many small changes to a configuration file. For example, one could change the training rate and the `out_dir` to save many models to different directories, each trained with a different training rate. Addresses #119
- `DelfiWrapper`'s and `EmceeSampler`'s sampler functions were fixed to only produce `nsteps` samples, matching the behavior of pyro samplers. Previously, they were producing `nsteps*num_chains` samples.

Miscellaneous:
- Added `_BaseRunner` in ili/inference/base.py as a template for sbi/pydelfi runners.

Note: a lot of files have changed, because we needed to propagate the naming changes to all the examples and tests. I'm happy to walk the reviewer through the listed changes.
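As a rough illustration of the universal configuration described above, here is a hedged sketch of the `backend`/`engine` dispatch rule. This is not the actual `InferenceRunner` implementation (that lives in ili/inference/runner.py); the mapping below only mirrors the selection behavior stated in the summary, and the pydelfi branch is an assumption:

```python
def select_runner(backend: str, engine: str) -> str:
    """Return the runner class name implied by a (backend, engine) pair.

    Sketch only: per the PR description, sbi engines prefixed with 'S'
    (sequential, e.g. SNLE) map to SBIRunnerSequential, others to SBIRunner.
    The pydelfi branch is an assumed simplification.
    """
    if backend == 'sbi':
        return 'SBIRunnerSequential' if engine.startswith('S') else 'SBIRunner'
    if backend == 'pydelfi':
        return 'DelfiRunner'
    raise ValueError(f"Unknown backend: {backend}")

print(select_runner('sbi', 'NLE'))   # -> SBIRunner
print(select_runner('sbi', 'SNLE'))  # -> SBIRunnerSequential
```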