modern-fortran / neural-fortran

A parallel framework for deep learning

Implement `batchnorm` layer #155

Open · milancurcic opened this issue 1 year ago

milancurcic commented 1 year ago

Originally requested by @rweed in #114.

A batch normalization layer is possibly the next most widely used layer after dense, convolutional, and maxpooling layers, and it is an important optimization tool because it accelerates training.

For neural-fortran, it means that we will need to allow passing a batch of data to individual layers' forward and backward methods. For dense and conv2d layers this is also an opportunity to numerically optimize the operations (e.g. running the same operation on a batch of data instead of one sample at a time), but for a batchnorm layer it is a requirement, because this layer evaluates moments (e.g. means and standard deviations) over a batch of inputs to normalize the data.
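For context, here is a minimal, self-contained sketch (not the neural-fortran API) of how a batchnorm forward pass computes moments over the batch dimension. The module and subroutine names, the `gamma`/`beta`/`eps` parameters, and the `(features, batch)` layout are illustrative assumptions:

```fortran
module batchnorm_sketch
  implicit none
contains
  ! Illustrative only: a stand-alone batchnorm forward pass over a batch.
  pure subroutine batchnorm_forward(x, gamma, beta, eps, y)
    real, intent(in) :: x(:,:)     ! input, shape (n_features, batch_size)
    real, intent(in) :: gamma(:)   ! learnable scale, shape (n_features)
    real, intent(in) :: beta(:)    ! learnable shift, shape (n_features)
    real, intent(in) :: eps        ! small constant for numerical stability
    real, intent(out) :: y(:,:)    ! normalized output, same shape as x
    real :: mu(size(x, 1)), var(size(x, 1))
    integer :: j
    ! Moments are computed over the batch (second) dimension.
    mu = sum(x, dim=2) / size(x, 2)
    var = sum((x - spread(mu, dim=2, ncopies=size(x, 2)))**2, dim=2) / size(x, 2)
    ! Normalize each sample with the batch statistics, then scale and shift.
    do j = 1, size(x, 2)
      y(:, j) = gamma * (x(:, j) - mu) / sqrt(var + eps) + beta
    end do
  end subroutine batchnorm_forward
end module batchnorm_sketch

program test_batchnorm_sketch
  use batchnorm_sketch, only: batchnorm_forward
  implicit none
  real :: x(2, 4), y(2, 4)
  call random_number(x)
  call batchnorm_forward(x, gamma=[1., 1.], beta=[0., 0.], eps=1e-5, y=y)
  print *, 'normalized batch:', y
end program test_batchnorm_sketch
```

This is the per-batch computation only; a full layer would also maintain running means and variances for inference and implement the corresponding backward pass.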

Implementing batchnorm will require another non-trivial refactor like the one we did to enable generic optimizers, though it will probably be easier. The first step will be to allow passing a batch of data to the forward and backward methods, as mentioned above. In other words, this snippet:

https://github.com/modern-fortran/neural-fortran/blob/b119194a6472e1759966ef9b0ee02dc66ddb1a3a/src/nf/nf_network_submodule.f90#L587-L590

should, after the refactor, be writable like this:

```fortran
call self % forward(input_data(:,:))
call self % backward(output_data(:,:))
```

where the first dimension corresponds to the inputs and outputs of the input and output layers, respectively, and the second dimension corresponds to multiple samples in a batch. I will open a separate issue for this.
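To make the shape convention concrete, here is a sketch of what the batched calls could look like from the user's side; the `net` variable, the layer sizes, and the batch size of 32 are assumptions, not the final API:

```fortran
! Hypothetical shapes for the proposed batched calls.
real :: input_data(3, 32)    ! dim 1: 3 inputs per sample, dim 2: 32 samples
real :: output_data(1, 32)   ! dim 1: 1 output per sample, dim 2: 32 samples
! ... fill input_data and output_data with a batch of training samples ...
call net % forward(input_data(:,:))
call net % backward(output_data(:,:))
```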

@Spnetic-5, given the limited time remaining in the GSoC program, we may be unable to complete the batchnorm implementation, but we can certainly make significant headway on it.
