Open · NohTow opened 4 months ago
Done in #37.
Leaving this open since we can still do better by loading the weights of linear layers for pre-v3 ST models whose linear layers are not stored as separate modules, such as https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2.
Right now, when initializing from an ST checkpoint, we chop off any existing "Dense" module. Although these checkpoints require training anyway, this layer can be a good initialization for the linear projection.
We can either merge the LinearLayer class into the Dense one (they are basically the same, except for the activation function, which could be set to None with a small modification to the original class), or we can copy the weights into the LinearLayer, as sketched below.
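For the copy-the-weights route, a minimal sketch could look like this. Assumptions: the checkpoint exposes its projection as a `sentence_transformers.models.Dense` module whose `nn.Linear` lives in `.linear`, a plain `nn.Linear` stands in here for whatever `LinearLayer` wraps internally, and `"path/to/st-checkpoint"` is a placeholder:

```python
import torch
from sentence_transformers import SentenceTransformer
from sentence_transformers.models import Dense


def dense_to_linear(dense: Dense) -> torch.nn.Linear:
    """Build an nn.Linear initialized from the weights of an ST Dense module."""
    has_bias = dense.linear.bias is not None
    linear = torch.nn.Linear(
        dense.linear.in_features, dense.linear.out_features, bias=has_bias
    )
    with torch.no_grad():
        linear.weight.copy_(dense.linear.weight)
        if has_bias:
            linear.bias.copy_(dense.linear.bias)
    return linear


model = SentenceTransformer("path/to/st-checkpoint")  # placeholder checkpoint
dense_modules = [module for module in model if isinstance(module, Dense)]
if dense_modules:
    projection = dense_to_linear(dense_modules[0])
    # ...hand `projection` (or its state_dict) to the LinearLayer instead of
    # keeping the random initialization.
```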
We should take care of a possible difference in output dimension compared to the configuration, and either prevent the weights from being loaded or show a warning.
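The guard could be as simple as the following sketch, assuming `configured_dim` is the projection dimension requested in our configuration and reusing the hypothetical `dense_to_linear` helper from above:

```python
import logging

logger = logging.getLogger(__name__)


def load_dense_if_compatible(dense: Dense, configured_dim: int):
    """Return an nn.Linear built from the Dense module, or None on a dimension mismatch."""
    if dense.linear.out_features != configured_dim:
        logger.warning(
            "Dense module outputs %d dimensions but the configuration asks for %d; "
            "keeping a randomly initialized projection instead.",
            dense.linear.out_features,
            configured_dim,
        )
        return None
    return dense_to_linear(dense)
```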