Closed: odow closed this issue 1 month ago
Admittedly, this is a simple affine transformation, but it is one that is almost always needed when embedding an ML model into an optimization problem, so automating it adds a convenience factor.
Moreover, I find it best practice to ship trained ML models with the preprocessing layers that encode and decode the inputs and outputs, respectively. See https://www.tensorflow.org/guide/keras/preprocessing_layers#benefits_of_doing_preprocessing_inside_the_model_at_inference_time. Supporting these types of layers helps to simplify the workflow and reduce the chance of modelling errors. For instance, if I train a Keras or PyTorch NN model and embed the normalization as a layer for inference, it would be ideal to have MathOptAI read in that model directly so that I wouldn't need to worry about normalizing the variables. Otherwise, I would have to manually look up the scaling values in Keras or PyTorch and then input them as transformations in MathOptAI.
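In Flux terms, shipping the preprocessing with the model might look like the following. This is an illustrative sketch only; the statistics and layer sizes are made-up values, not from any actual trained model:

```julia
using Flux

# Normalization statistics computed from the training data (illustrative values).
μ = [10.0, 5.0]
σ = [2.0, 1.0]

# Embed the normalization as the first layer so that downstream consumers
# (including a tool like MathOptAI) see one self-contained model and never
# need to look up the scaling values separately.
model = Chain(
    x -> (x .- μ) ./ σ,   # preprocessing baked into the model
    Dense(2 => 4, relu),
    Dense(4 => 1),
)
```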
What PyTorch normalization layers do you want support for?
Let's follow Flux and call this `Scale(scale::Vector{T}, bias::Vector{T})`.
That works. I have mostly used Keras in the past, so I am not sure what the equivalent layer is in PyTorch.
It probably just needs to be:
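A minimal sketch of what such a layer could look like, assuming Flux-style broadcasting semantics; the struct and its call overload here are illustrative, not MathOptAI's actual API:

```julia
# Illustrative sketch: a Scale layer applying y = scale .* x .+ bias.
# Names and semantics are assumptions, not MathOptAI's actual implementation.
struct Scale{T}
    scale::Vector{T}
    bias::Vector{T}
end

# Applying the layer is an element-wise affine transformation.
(layer::Scale)(x::AbstractVector) = layer.scale .* x .+ layer.bias

layer = Scale([2.0, 3.0], [1.0, -1.0])
layer([1.0, 1.0])  # element-wise: [2*1+1, 3*1-1] = [3.0, 2.0]
```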
But @pulsipher thinks this is useful in #82.