modern-fortran / neural-fortran

A parallel framework for deep learning
MIT License

scaling #165

Open castelao opened 9 months ago

castelao commented 9 months ago

Is there already a solution to scale the input? I'm currently scaling my features manually before feeding them into nf. It would be nice if we could incorporate such scaling into the nn definition to avoid mistakes. Would it make sense to create something like a normalizing/scaling layer? Then we could pass two values per feature, 'reference' and 'scale', to set_params().
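For concreteness, the transform being proposed is presumably an affine rescaling of each feature, y_i = (x_i - reference_i) / scale_i. A minimal sketch in Python (the function name `scale_input` and its signature are hypothetical, not part of nf):

```python
def scale_input(x, reference, scale):
    """Rescale each feature: y_i = (x_i - reference_i) / scale_i.

    x, reference, and scale are equal-length sequences of floats.
    This is an illustrative sketch, not an nf API.
    """
    if not (len(x) == len(reference) == len(scale)):
        raise ValueError("per-feature parameter lengths must match")
    return [(xi - r) / s for xi, r, s in zip(x, reference, scale)]

# Example: features with reference 10 and 20, spread 2 and 5
print(scale_input([12.0, 15.0], [10.0, 20.0], [2.0, 5.0]))  # [1.0, -1.0]
```

Storing `reference` and `scale` alongside the other network coefficients, rather than applying this in user code, is exactly what would prevent the mismatch mistakes described above.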

milancurcic commented 9 months ago

There's not, and I agree there should be.

Do you think it's justified for it to be its own layer (rather than just a Fortran function)? At first thought it seems like overkill, but there may be good use cases. For example, if that's what popular frameworks do, it may be a good design choice to do the expected thing and provide it as a layer. Also, if there's any application that requires scaling anywhere inside the network (i.e., not just at the input or output), then it definitely should be a layer.

(sorry for the delay with the testing and the review; getting around to it)

castelao commented 9 months ago

My motivation was to have a more robust and well-defined NN. I'm wary of the situation where I re-calibrate an NN with a slightly different normalization and forget to update that step in the application. It doesn't need to be a layer by itself, but it would be nice to explicitly define the normalization I'm using together with the other coefficients.

https://keras.io/api/layers/normalization_layers/
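For reference, the Keras Normalization layer linked above learns a per-feature mean and variance from sample data (via its adapt() method) and then applies (x - mean) / sqrt(variance) at inference time, so the normalization travels with the model. A rough pure-Python mimic of that behaviour (the class name `NormalizationLayer` is hypothetical, and this is a sketch of the idea, not the Keras implementation):

```python
import math

class NormalizationLayer:
    """Sketch of a Keras-style normalization layer: adapt() learns
    per-feature mean/variance from data, __call__ applies them.
    Illustrative only; not an nf or Keras API."""

    def adapt(self, rows):
        # rows: list of samples, each a list of feature values
        n = len(rows)
        nfeat = len(rows[0])
        self.mean = [sum(r[j] for r in rows) / n for j in range(nfeat)]
        self.var = [sum((r[j] - self.mean[j]) ** 2 for r in rows) / n
                    for j in range(nfeat)]

    def __call__(self, x):
        # Normalize each feature to zero mean and unit variance
        return [(xi - m) / math.sqrt(v)
                for xi, m, v in zip(x, self.mean, self.var)]

norm = NormalizationLayer()
norm.adapt([[0.0, 10.0], [2.0, 14.0]])
print(norm([2.0, 10.0]))  # [1.0, -1.0]
```

Because the learned statistics are stored as layer state, saving and reloading the model carries the normalization along, which addresses the re-calibration hazard described above.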

Don't worry about delaying the other PR. I've been running with my branch, so it's not holding me up. Whenever it's ready, I'll switch to the official repo. And congrats on your new position; I can imagine how busy you've been.