SCiarella opened 1 month ago
I see that the current code is prone to error in the following case: when training a priori, the `batch_size` is manually and explicitly specified in `preprocess_priori.jl` to create the dataloader. Then in `train_priori.jl` a new `batch_size` has to be passed to create the callback. I do not think it is problematic if the two values do not match exactly, but it is weird that we do not have it as a single global parameter of the training.
It could be useful to collect all the relevant variables (such as `batch_size`) into a single `configuration_x.yaml`. This would allow us to run `generate.jl configuration_x.yaml` or `postprocess.jl configuration_y.yaml` on a server without using a REPL. This approach is better than training all the different models via a single `train_array.jl` because it allows more flexibility.
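A minimal sketch of what this could look like, assuming the YAML.jl package; the parameter names in the configuration are illustrative, not the repository's actual settings:

```julia
using YAML

# Write a throwaway configuration_x.yaml so the example is self-contained;
# in practice this file would live in the repository.
write("configuration_x.yaml", """
batch_size: 64
n_epochs: 100
""")

# A script such as generate.jl could then start with
#   config = YAML.load_file(ARGS[1])
# so it can be invoked as `julia generate.jl configuration_x.yaml`.
config = YAML.load_file("configuration_x.yaml")
batch_size = config["batch_size"]
```

Every script would read its parameters from the file named on the command line, so the same configuration is shared across preprocessing, training, and postprocessing.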