jwallwork23 opened 2 months ago
To investigate: is the underlying Torch library smart enough to do this, or will we have to loop?
It would still be good to implement this on the FTorch side, but it's worth noting that batching can be incorporated on the PyTorch side (so that the model accepts Fortran arrays of arbitrary size in one dimension) with a little thought and care (see the sketch below).
This is what was done for MiMA here: https://github.com/DataWaveProject/MiMA-machine-learning/blob/ML/src/shared/pytorch/arch_davenet.py. That said, it is not the easiest code to follow.
It might still be worth comparing the performance of implementing this on the Fortran side vs. the Torch side.
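As a rough illustration of the "thought and care" involved, here is a minimal sketch of a wrapper module that accepts either a single sample or a batch. The `BatchedWrapper` name and structure are assumptions for illustration, not taken from the MiMA code linked above:

```python
import torch


class BatchedWrapper(torch.nn.Module):
    """Wrap a model so it accepts a single sample or a batch.

    Hypothetical example, not the MiMA implementation.
    """

    def __init__(self, model: torch.nn.Module):
        super().__init__()
        self.model = model

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # A single (unbatched) sample coming in from Fortran: add a
        # batch dimension of size 1, run the model, then remove it.
        if x.dim() == 1:
            return self.model(x.unsqueeze(0)).squeeze(0)
        # Otherwise treat the leading dimension as the batch dimension;
        # torch.nn layers already broadcast over leading dimensions.
        return self.model(x)
```

Since the only control flow here is on tensor rank, a wrapper like this should still be scriptable with `torch.jit.script`, so nothing would need to change in how the saved model is loaded.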
When we run `examples/1_SimpleNet/simplenet.py`, the final thing that's executed is effectively a forward pass of the model on a single 1D input tensor. This would also work with batching, e.g. by passing a 2D tensor whose leading dimension is the batch dimension.
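A minimal sketch of both calls, assuming the `SimpleNet` module defined in that script (the import and the input values are illustrative):

```python
import torch
from simplenet import SimpleNet  # module defined in the example script

model = SimpleNet()
model.eval()

with torch.no_grad():
    # Single sample, as in the example: a 1D tensor of 5 values.
    single = model(torch.Tensor([0.0, 1.0, 2.0, 3.0, 4.0]))

    # Batched: a 2D tensor whose leading dimension is the batch
    # dimension. torch.nn.Linear broadcasts over leading dimensions,
    # so no change to the model itself is needed.
    batched = model(
        torch.Tensor(
            [
                [0.0, 1.0, 2.0, 3.0, 4.0],
                [5.0, 6.0, 7.0, 8.0, 9.0],
            ]
        )
    )
```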
We should enable batched calls like this on the FTorch side, too.