From a discussion with @alessiospuriomancini and @htjense, we came up with a proposed specification for a YAML file that packages a cosmopower network.
The aims of this packaging are to:

- enable replicability, reusability, and distribution of networks;
- ensure 'safe' use of networks (e.g. only within the trained parameter ranges);
- allow fallback to the code being emulated (e.g. by including the full list of settings used in that code during training);
- allow automated enhancement of the training set (e.g. via reinforcement learning).
Note that the aim is for this to be flexible enough to work for things other than Boltzmann codes, and (I think) the interface with inference codes such as cobaya and cosmosis should be managed within those packages.
A rough proposal for this specification is below (inspired by the one for camb from @htjense, attached):
```yaml
network_name:
  emulated_code:
    name:
    version:
  samples:
    N_training:
    xmin:
    xmax:
    xbinning:
    extra_args:
      {non-default arguments that were used in the emulated code}
    full_args_file: {file containing the full arguments used in the emulated code}
  networks:
    {observable_name}:
      type: NN
      log: True
      n_traits:
        n_hidden: [ ]
      training:
        validation_split:
        learning_rates: [ ]
        batch_sizes: [ ]
        gradient_accumulation_steps: [ ]
        patience_values: [ ]
        max_epochs: [ ]
  sampled_parameters:
    {par1}: [ , ]
    {par2}: "lambda par1: 1e-10 * np.exp(par1)"
    drop: [ par1 ]
    derived: [ ]
```
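To illustrate how an inference code might consume such a file, here is a minimal sketch of the 'safe use' and derived-parameter aspects: loading the YAML, checking that a parameter point lies within the trained ranges, and evaluating the lambda-string entries. All names here (`load_spec`, `check_ranges`, `derived_values`, the `my_network` example) are hypothetical and follow the draft layout above, not any released cosmopower API.

```python
import inspect

import numpy as np
import yaml

# A tiny filled-in fragment of the draft spec, for demonstration only.
SPEC = """
my_network:
  sampled_parameters:
    ombh2: [0.015, 0.030]
    logA: [2.5, 3.5]
    As: "lambda logA: 1e-10 * np.exp(logA)"
    drop: [logA]
"""


def load_spec(text):
    """Parse the packaging YAML into a plain dict."""
    return yaml.safe_load(text)


def check_ranges(spec, point):
    """True iff every directly sampled parameter lies within its trained range."""
    params = spec["my_network"]["sampled_parameters"]
    for name, bounds in params.items():
        if name in ("drop", "derived") or isinstance(bounds, str):
            continue  # bookkeeping keys and derived-parameter expressions
        lo, hi = bounds
        if not lo <= point[name] <= hi:
            return False
    return True


def derived_values(spec, point):
    """Evaluate string-valued entries as lambdas of the sampled parameters.

    Note: eval() assumes the YAML file is trusted; a real implementation
    would want a restricted expression parser instead.
    """
    params = spec["my_network"]["sampled_parameters"]
    out = {}
    for name, expr in params.items():
        if isinstance(expr, str):
            fn = eval(expr, {"np": np})
            args = inspect.signature(fn).parameters
            out[name] = fn(*(point[a] for a in args))
    return out
```

A caller would then refuse to evaluate the network outside its training box, e.g. `check_ranges(spec, {"ombh2": 0.022, "logA": 3.0})` passes while `ombh2: 0.05` does not, which is the 'safe use' guarantee the packaging is meant to provide.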
lcdm.yaml.txt