huggingface / transformers

🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
https://huggingface.co/transformers
Apache License 2.0

Community contribution - `BetterTransformer` integration for more models! #20372

Closed younesbelkada closed 11 months ago

younesbelkada commented 1 year ago

BetterTransformer integration for more models!

The BetterTransformer API provides faster inference on CPU & GPU through a simple interface!

Models can benefit from significant speedups with a one-liner, provided the latest version of PyTorch is installed. A complete guide on how to convert a new model is available in the BetterTransformer documentation!

Here is a list of models that could potentially be supported; pick one of the architectures below and let's discuss the conversion!

Text models 🖊️ :

Vision models 📷 :

Audio models 🔉 :

Also let us know if you think we missed an architecture that could be supported. Note that for the encoder-decoder based models below, we expect to convert the encoder only.

Support for decoder-based models coming soon!

cc @michaelbenayoun @fxmarty

https://github.com/huggingface/optimum/issues/488

JuheonChu commented 1 year ago

May I ask if there is any way to see how the `BetterTransformer` feature was implemented for models that already have it, like Tapas or FSMT?

fxmarty commented 1 year ago

Hi @JuheonChu , you can see it here: https://github.com/huggingface/optimum/pull/520/files , https://github.com/huggingface/optimum/pull/494/files

HVjay commented 1 year ago

Hey @younesbelkada, happy to take a look at any remaining model that needs integration!

mszsorondo commented 1 year ago

Same here @younesbelkada, any model you need

fxmarty commented 1 year ago

Hi @HVjay @mszsorondo, don't hesitate to pick any model you are interested in that uses a classic encoder attention + feed-forward architecture! You can open a PR in Optimum, and if you need help we'll guide you from there.
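To illustrate what such a conversion does under the hood, here is a minimal, self-contained sketch (the toy layer and `fuse_qkv` helper are hypothetical, not Optimum's actual API): BetterTransformer-style conversions pack the separate Q/K/V projections of a classic encoder layer into a single fused weight so PyTorch's fastpath kernel can consume them.

```python
import torch
from torch import nn

class ToyEncoderLayer(nn.Module):
    """Stand-in for a classic encoder self-attention block with separate Q/K/V projections."""
    def __init__(self, hidden=32):
        super().__init__()
        self.q = nn.Linear(hidden, hidden)
        self.k = nn.Linear(hidden, hidden)
        self.v = nn.Linear(hidden, hidden)

def fuse_qkv(layer):
    """Pack the three projections into one (3*hidden, hidden) weight and one bias."""
    weight = torch.cat([layer.q.weight, layer.k.weight, layer.v.weight], dim=0)
    bias = torch.cat([layer.q.bias, layer.k.bias, layer.v.bias], dim=0)
    return weight, bias

layer = ToyEncoderLayer()
weight, bias = fuse_qkv(layer)

x = torch.randn(5, 32)
# One fused matmul instead of three separate ones
q_out, k_out, v_out = (x @ weight.T + bias).split(32, dim=-1)

# The fused projection reproduces the three separate projections
assert torch.allclose(q_out, layer.q(x), atol=1e-6)
assert torch.allclose(v_out, layer.v(x), atol=1e-6)
```

This is why only "classic" attention blocks qualify: the conversion assumes plain linear Q/K/V projections with no extra per-head tricks in between.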

manish-p-gupta commented 1 year ago

Understood, I will go through the docs and open a PR in Optimum. Is there anything else I should take care of?

Hi @younesbelkada, I opened a PR for the RoFormer BetterTransformer integration from my other GitHub account. Please let me know if I missed anything.

HVjay commented 1 year ago

Hi @fxmarty, I wanted to confirm that Conditional DETR (`ConditionalDetrEncoderLayer`) can be supported!

younesbelkada commented 1 year ago

Hi @HVjay, thanks for your interest! I think Detr can be supported, as well as ConditionalDetr, since it seems to use the classic attention mechanism; this is also confirmed by the paper, which states that the method uses classic transformer-based models. However, note that only the encoder part can be converted.

Hi @mszsorondo, Thank you for your message! Recently BLIP has been added, the model should support BetterTransformer integration (Vision + text)

younesbelkada commented 1 year ago

Hi @HVjay ,

Actually there is already someone working on Detr, check: https://github.com/huggingface/optimum/pull/684

JanFidor commented 1 year ago

Hi @younesbelkada , could I pick up RoFormer ?

dewasahu2003 commented 1 year ago

@sushmanthreddy are you still working on Detr? If so, please let me know.

dewasahu2003 commented 1 year ago

@younesbelkada Hi 👋 could I take Speech2Text 🙂

y3sar commented 1 year ago

@younesbelkada Hello, I would love to contribute to this issue. I am new to contributing to transformers. Can you please tell me which model layers are still unclaimed? I would like to take one up :)

awinml commented 1 year ago

@younesbelkada I would like to work on Detr.

@mszsorondo Are you still working on it? There has not been any activity on your PR since Jan 8. I can pull from your PR and fix the failing tests.

dewasahu2003 commented 1 year ago

@awinml I actually submitted the PR for the Detr model.

awinml commented 1 year ago

@dewasahu2003 No problem.

It's always better to inform the original author and pull from their PR so they get due credit. Hence the question was aimed at @mszsorondo.

dewasahu2003 commented 1 year ago

@younesbelkada Hey 👋

I have submitted the PR adding BetterTransformer support for Detr and mentioned you there. Next time I will keep in mind to ask the PR authors first.

mobley-trent commented 1 year ago

Hi, @younesbelkada I'd like to work on ProphetNet 😀

mszsorondo commented 1 year ago

> @younesbelkada I would like to work on Detr.
>
> @mszsorondo Are you still working on it? There has not been any activity on your PR since Jan 8. I can pull from your PR and fix the failing tests.

Go for it! Sorry for the delay

Jack-Chuang commented 1 year ago

Hi @younesbelkada, @michaelbenayoun, and @fxmarty,

I would like to work on Speech2TextLayer.

What are the next steps in getting started?

Thank you!

jucamohedano commented 1 year ago

Hi! @younesbelkada @michaelbenayoun @fxmarty I'm interested in adding support for one of the models in the list, although I believe the only model left might be Speech2TextLayer, which has been claimed by @Jack-Chuang.

mobley-trent commented 1 year ago

Hello @younesbelkada @fxmarty and @michaelbenayoun, I would like to work on the RoFormer layer, since I saw that someone had already worked on ProphetNet. Has the model been claimed?

RoboTuan commented 1 year ago

Hello @younesbelkada @fxmarty and @michaelbenayoun, I would love to help with the integration of more models for BetterTransformer! I'm happy to take whatever is left, since I think a lot of developers are already contributing to most of the models. Let me know if I can still help with something!

mohammedElfatihSalah commented 1 year ago

@younesbelkada is there anything I can help with in this issue?

deepwilson commented 12 months ago

@younesbelkada could you please update the original list of pending items? Or has this project stalled?

sam-h-bean commented 11 months ago

Is SPLADE possible?

ghost commented 11 months ago

Hi @younesbelkada! I'm new to the open-source community but have good experience with torch, transformers, numpy, etc. Can I be assigned the RoFormer task? I'd like to give it a shot!

adeepbiswas commented 11 months ago

Hi @younesbelkada, can I take up the ProphetNet task? I'm new to open source and might take some time, but I'm eager to try my hand at this.

younesbelkada commented 11 months ago

Hi everyone, sorry for the delay in replying to this issue and community contribution. We had some internal discussions and decided to migrate the BetterTransformer API into transformers core by directly supporting torch.scaled_dot_product_attention in the modeling files. Check out this issue for more details: https://github.com/huggingface/transformers/issues/26557, and this PR for the PoC: https://github.com/huggingface/transformers/pull/26572. We may open a community contribution to extend support to all architectures, but that is not yet certain. I will keep you all posted! Thanks again for all your effort and amazing contributions! 🎉
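For reference, the migration described above replaces the manual attention computation in the modeling files with PyTorch's fused kernel. A minimal equivalence check (shapes are arbitrary; requires torch >= 2.0 for `scaled_dot_product_attention`):

```python
import torch
import torch.nn.functional as F

# (batch, heads, seq_len, head_dim)
q = torch.randn(2, 4, 8, 16)
k = torch.randn(2, 4, 8, 16)
v = torch.randn(2, 4, 8, 16)

# Manual attention, as written in most modeling files
scores = q @ k.transpose(-2, -1) / (16 ** 0.5)
manual = F.softmax(scores, dim=-1) @ v

# The fused kernel the migration switches to (default scale is 1/sqrt(head_dim))
fused = F.scaled_dot_product_attention(q, k, v)

assert torch.allclose(manual, fused, atol=1e-5)
```

The fused call picks an efficient backend (flash, memory-efficient, or math) at runtime, which is what makes the SDPA path in transformers core supersede the separate BetterTransformer conversion.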

vu0607 commented 7 months ago

Hi @younesbelkada, @michaelbenayoun, and @fxmarty, the model type vision-encoder-decoder is not yet supported with BetterTransformer. I hope you can support it soon! <3