modern-fortran / neural-fortran

A parallel framework for deep learning

Refactor `forward` and `backward` methods to allow passing a batch of data instead of one sample at a time #156

Open milancurcic opened 1 year ago

milancurcic commented 1 year ago

In support of #155.

This will impact the `forward` and `backward` methods in:

* the `network` type (`network % train`)
* the `dense_layer` type
* the `conv2d_layer` type

Effectively, rather than looping over each sample in a batch inside `network % train`, we will pass batches of data all the way down to the lowest level, that is, the `forward` and `backward` methods of the `dense_layer` and `conv2d_layer` types. Pushing the loop over samples down the stack will also allow implementing a `batchnorm_layer`, which needs to see the whole batch at once to compute batch statistics.
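For illustration, here is a minimal, self-contained sketch of what a batched dense-layer `forward` could look like. The names (`dense_forward_batch`), array shapes, and the sigmoid activation are assumptions for this sketch, not the existing neural-fortran API:

```fortran
module dense_batch_sketch
  implicit none
contains

  ! Hypothetical batched forward pass for a dense layer.
  ! weights(n_inputs, n_neurons), biases(n_neurons),
  ! input(n_inputs, batch_size) -> output(n_neurons, batch_size).
  pure function dense_forward_batch(weights, biases, input) result(output)
    real, intent(in) :: weights(:,:), biases(:), input(:,:)
    real, allocatable :: output(:,:)
    integer :: i
    ! A single matmul covers the whole batch, replacing the per-sample
    ! loop that currently lives in network % train.
    output = matmul(transpose(weights), input)
    ! Broadcast the biases across the batch dimension.
    do concurrent (i = 1:size(output, 2))
      output(:,i) = output(:,i) + biases
    end do
    ! The elementwise activation (sigmoid here) is unchanged by batching.
    output = 1.0 / (1.0 + exp(-output))
  end function dense_forward_batch

end module dense_batch_sketch

program demo
  use dense_batch_sketch
  implicit none
  real :: w(3,2), b(2), x(3,4)
  call random_number(w); call random_number(b); call random_number(x)
  print *, shape(dense_forward_batch(w, b, x))  ! prints 2 4
end program demo
```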

It will also potentially allow more efficient matmuls in the dense and conv layers if we replace the stock `matmul` with a more specialized and efficient `sgemm` (or similar) from some flavor of BLAS or MKL.
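As a hedged sketch of that swap, assuming we link against any BLAS implementation (reference BLAS, OpenBLAS, or MKL, e.g. `-lblas`), the batched matmul above could become a single `sgemm` call. The `sgemm` routine and its argument list are standard BLAS; the surrounding subroutine and its names are hypothetical:

```fortran
subroutine dense_forward_sgemm(weights, input, output)
  implicit none
  real, intent(in) :: weights(:,:)   ! (n_inputs, n_neurons)
  real, intent(in) :: input(:,:)     ! (n_inputs, batch_size)
  real, intent(out) :: output(:,:)   ! (n_neurons, batch_size)
  external :: sgemm                  ! single-precision BLAS matrix multiply
  integer :: m, n, k
  m = size(weights, 2)  ! n_neurons
  n = size(input, 2)    ! batch_size
  k = size(weights, 1)  ! n_inputs
  ! output := weights**T * input, equivalent to
  ! matmul(transpose(weights), input) but typically faster for large batches.
  call sgemm('T', 'N', m, n, k, 1.0, weights, k, input, k, 0.0, output, m)
end subroutine dense_forward_sgemm
```

Batching is what makes this attractive: with one sample at a time the dense forward pass is a matrix-vector product, while with a batch it becomes a matrix-matrix product, which is where optimized BLAS implementations gain the most.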