huggingface / transformers


Whether OutEffHop can be supported in Transformers #31046

Open robinzixuan opened 5 months ago

robinzixuan commented 5 months ago

Feature request

We request that an "OutEffHop" option be added to the BERT, OPT, and ViT models.

Motivation

I am the author of OutEffHop, and we plan to release our OutEffHop-based models on the Hugging Face Hub. However, because we use a different activation function (not softmax) in attention, loading our weights into the original model structure makes online inference badly incorrect.

Your contribution

We are able to supply the code for OutEffHop and assist with integrating it into BERT and OPT. If necessary, we can also help incorporate OutEffHop into ViT. My initial idea is to add an option to the model configuration that lets the user choose whether to use OutEffHop; if they opt in, the model uses the new activation function.
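For concreteness, a minimal sketch of what such a config switch might look like. The flag name `use_outeffhop`, the config subclass, and the use of a Softmax₁-style activation (exp(x_i) / (1 + Σ_j exp(x_j))) as the stand-in for the OutEffHop activation are all assumptions for illustration, not the final design:

```python
import torch
from transformers import BertConfig


def softmax_1(scores: torch.Tensor, dim: int = -1) -> torch.Tensor:
    # softmax_1(x)_i = exp(x_i) / (1 + sum_j exp(x_j)): softmax with an
    # extra constant-0 logit in the denominator, computed stably.
    # Stand-in for the actual OutEffHop activation (an assumption here).
    m = scores.max(dim=dim, keepdim=True).values.clamp(min=0.0)
    exps = torch.exp(scores - m)
    return exps / (torch.exp(-m) + exps.sum(dim=dim, keepdim=True))


class OutEffHopBertConfig(BertConfig):
    # Hypothetical config subclass carrying the proposed opt-in flag.
    model_type = "outeffhop-bert"

    def __init__(self, use_outeffhop: bool = True, **kwargs):
        super().__init__(**kwargs)
        self.use_outeffhop = use_outeffhop


# Inside a custom self-attention forward, the flag would then select the
# attention activation, e.g.:
#   probs = softmax_1(scores) if config.use_outeffhop else scores.softmax(dim=-1)
```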

amyeroberts commented 5 months ago

Hi @robinzixuan, thanks for opening this feature request!

Adding new options to existing models, especially something as specific as this, is normally something we try to avoid in transformers.

If the attention mechanism is the only thing that's different for these models, I think the easiest approach would be to have an attention class, e.g. similar to BertSdpaSelfAttention, which can be selected as an option through the config. The easiest and recommended way to make a model available in transformers is to add the modeling code directly on the Hub: https://huggingface.co/docs/transformers/custom_models
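To illustrate the Hub route, a rough sketch of the custom-code pattern from the linked docs, assuming the hypothetical `OutEffHopBertConfig` from the earlier comment plus a model subclass; the class names, file names, and repo id are placeholders:

```python
# modeling_outeffhop.py -- sketch only; OutEffHopBertConfig is the
# hypothetical config subclass above, saved as configuration_outeffhop.py.
from configuration_outeffhop import OutEffHopBertConfig
from transformers import AutoModelForMaskedLM, BertForMaskedLM


class OutEffHopBertForMaskedLM(BertForMaskedLM):
    config_class = OutEffHopBertConfig
    # A full implementation would swap each layer's self-attention for a
    # BertSelfAttention subclass whose forward applies the OutEffHop
    # activation to the attention scores instead of softmax.


# Register the classes so AutoConfig/AutoModel can resolve them from the
# code files shipped alongside the weights on the Hub:
OutEffHopBertConfig.register_for_auto_class()
OutEffHopBertForMaskedLM.register_for_auto_class("AutoModelForMaskedLM")

model = OutEffHopBertForMaskedLM(OutEffHopBertConfig())
model.push_to_hub("your-org/outeffhop-bert-base")  # placeholder repo id

# Users then opt in to running the custom code explicitly:
model = AutoModelForMaskedLM.from_pretrained(
    "your-org/outeffhop-bert-base", trust_remote_code=True
)
```

With the auto-class registration, `push_to_hub` copies the defining `.py` files into the repo and records them in `auto_map` in `config.json`, so the checkpoint stays loadable even though the architecture never lands in the transformers library itself.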

cc @ArthurZucker @younesbelkada (as this might be related to quantization from the arXiv paper)

robinzixuan commented 4 months ago

Thank you. We are the authors of OutEffHop, and I have successfully set up the OutEffHop BERT and OPT-125m versions. However, the inference API still appears to be incorrect. What should I do about that?
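One way to narrow this down (a sketch with a placeholder repo id, assuming the checkpoints ship custom modeling code): first confirm the model behaves correctly when loaded locally, since the hosted inference widget may not execute `trust_remote_code` modeling files and could fall back to the stock architecture:

```python
from transformers import pipeline

# Placeholder repo id; substitute the actual OutEffHop checkpoint.
fill_mask = pipeline(
    "fill-mask",
    model="your-org/outeffhop-bert-base",
    trust_remote_code=True,  # needed if the repo ships custom modeling code
)
print(fill_mask("Paris is the [MASK] of France."))
```

If local results are correct but the hosted widget's are not, the problem likely lies in how the inference API resolves the architecture rather than in the modeling code itself.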

amyeroberts commented 4 months ago

However, the inference API still appears to be incorrect. What should I do about that?

Could you explain a little bit about the errors or incorrect behaviour you're encountering?