Closed: jdenholm closed this issue 9 months ago.
Ideally we want to check the Fortran NN against the original Python implementation to ensure the Fortran NN does exactly what the Python network was trained to do.
At present we only have weights in a NetCDF file used to populate the Fortran model. Ideally we would also have a pickle or PyTorch file of the trained weights so that we can compare/validate directly against the natively trained model.
If we can obtain this data, it would allow us to perform this check against an independent ground truth.
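A minimal sketch of what that weight comparison could look like, assuming a hypothetical NetCDF file, variable name, and saved PyTorch state_dict (none of these names are confirmed in the repo):

```python
# Sketch: compare weights stored in the NetCDF file (used to populate the
# Fortran NN) against a saved PyTorch state_dict. File names, variable
# names, and state_dict keys below are hypothetical placeholders.
import numpy as np
import torch
from netCDF4 import Dataset

with Dataset("nn_weights.nc") as nc:            # hypothetical filename
    w1_nc = np.array(nc.variables["w1"][:])     # hypothetical variable name

state = torch.load("trained_weights.pt", map_location="cpu")  # hypothetical
w1_pt = state["layer1.weight"].numpy()          # hypothetical key

np.testing.assert_allclose(w1_nc, w1_pt, rtol=1e-6, atol=1e-8)
```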
This might be in Yanni's home directory, so some investigation is needed. @paogorman suggests that the training code saves a pickle of the weights. Yanni did do cross-validation manually, so we could close this. We should instead prioritise a more general integration test.
Idea for a simple test from the meeting: if the NN predicts no change (to the field), then passing it back through the interface should also produce zero effect (or a very small one), as a test of the interpolation.
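A hedged sketch of that zero-tendency check, written against a hypothetical Python wrapper around the Fortran interface (not an existing function in this repo):

```python
# Sketch of the zero-tendency test: if the NN output is identically zero,
# the field returned through the interpolation/regridding interface should
# be (numerically) unchanged. `apply_nn_tendency` is a hypothetical wrapper
# around the Fortran interface.
import numpy as np

def test_zero_tendency_leaves_field_unchanged():
    field = np.random.default_rng(0).normal(size=(40, 48))  # arbitrary grid
    zero_tendency = np.zeros_like(field)
    updated = apply_nn_tendency(field, zero_tendency)        # hypothetical
    np.testing.assert_allclose(updated, field, atol=1e-10)
```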
Closing this as it is barely started and stale, and tests are now being added as part of #44.
Closes #13.
Broadly, now that we have the neural network in PyTorch and Fortran, we need to add appropriate tests to ensure the various components/functions are behaving as we expect.
In this PR, we:
- Add tests for the neural network in PyTorch.
- Add tests to ensure reproducibility between the Fortran and PyTorch models.
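As an illustration only (the real tests live in this PR), a reproducibility check might look like the sketch below, where `Net` and `run_fortran_nn` are hypothetical stand-ins for the project's model class and a helper that drives the Fortran NN on the same input:

```python
# Sketch of a PyTorch/Fortran reproducibility test. `Net`, the input size,
# and `run_fortran_nn` are hypothetical placeholders, not confirmed names.
import numpy as np
import torch

def test_fortran_matches_pytorch():
    torch.manual_seed(0)
    x = torch.randn(1, 61)                         # hypothetical input size
    model = Net()                                  # hypothetical model class
    model.load_state_dict(
        torch.load("trained_weights.pt", map_location="cpu")  # hypothetical
    )
    model.eval()
    with torch.no_grad():
        y_torch = model(x).numpy()
    y_fortran = run_fortran_nn(x.numpy())          # hypothetical helper
    np.testing.assert_allclose(y_fortran, y_torch, rtol=1e-5, atol=1e-7)
```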