huggingface / transformers

🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
https://huggingface.co/transformers
Apache License 2.0

Model resources contribution #20055

Closed stevhliu closed 10 months ago

stevhliu commented 1 year ago

Hi friends! 👋

There are a lot of cool existing resources for how to do X task with Y model, and we’d like to showcase and aggregate these resources on each model’s documentation page. This’ll help users see how they can get started with a model for their own tasks, since we know a lot of users check out the model documentation first. Take a look at the completed resource section for DistilBERT as an example.

I’ve identified the top 20 models by pageviews, and now I’d like to open it up to the community if anyone is interested in helping!

Anyone can contribute; you just need to comment and claim one of the models on this list. Contributing is super easy:

  1. Once you've claimed a model from the list, collect the existing resources from official Hugging Face materials (docs, notebooks, example scripts, blog posts) as well as from the community.

  2. Organize the resources by model tasks or applications (like inference or deployment):

    • Use the corresponding icons for each task (you can find the names for each icon here):
      <PipelineTag pipeline="name-of-task"/>
    • For certain categories, you can just use a plain heading like 🚀 Deploy, ⚡️ Inference, or ⚗️ Optimization.
    • For community resources, add the 🌎 emoji at the end to indicate it's not an official Hugging Face resource.
    • Use this DistilBERT file as a template. You can copy and paste the intro text and just replace DistilBERT with the name of the model you're working on (see the sketch after this list for what a finished section can look like).
  3. Open a Pull Request with the new resources for your chosen model and ping me for a review (if you’re just getting started with contributing to an open-source project, check out @merveenoyan's awesome GitHub Contribution Guide).

  4. Congratulations, you just merged a PR into 🤗 Transformers, and your contribution will now help anyone who is looking at the model docs! 🎉
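
To make this concrete, here's a minimal sketch of what a finished section might look like, loosely modeled on the DistilBERT template; the model name, class name, and links below are placeholders rather than real resources:

```md
## Resources

A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with SomeModel.

<PipelineTag pipeline="text-classification"/>

- A blog post on fine-tuning SomeModel for text classification. <!-- placeholder link -->
- [`SomeModelForSequenceClassification`] is supported by this example script and notebook. <!-- placeholder class name and links -->

🚀 Deploy

- A community blog post on deploying SomeModel to production. 🌎 <!-- placeholder link -->
```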

If you have any questions or need any help, don’t hesitate to ping me! 🤗❤️

shogohida commented 1 year ago

Hi @stevhliu, I want to work on OpenAI GPT!

stevhliu commented 1 year ago

Awesome! I'm looking forward to your contribution, and feel free to ping me if you have any questions! 🤗

shogohida commented 1 year ago

@stevhliu I have a question. Is there a good way to search GitHub and blog posts? I tried to find related repos and blog posts with the keyword "OpenAI GPT", but I couldn't find them because the search function doesn't seem to work well... Should I search repos and posts one by one?

I made a draft pull request, although it doesn't have GitHub or blog links yet. You can check it to see if my research is on the right track: https://github.com/huggingface/transformers/pull/20084

stevhliu commented 1 year ago

Hey @shogohida, thanks for starting on this!

The easiest way I've found for searching the blog posts is to go to the blog repo and search for mentions of GPT inside the repo. Then you can take a look at the results and see what's relevant!

For GitHub materials, you only have to look at the example scripts and notebooks and see what tasks your model can be applied to. For example, OpenAI GPT is a causal language model, so you can link to example scripts for causal language modeling and also text generation. You can link the equivalent scripts in TensorFlow and Flax if they're available.

After the scripts, you can hop over to the notebooks and see what tasks your model can be applied to (language modeling, text generation), and do the same thing for the community notebooks!
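
For instance, based on the scripts and notebooks above, the text-generation part of the OpenAI GPT section could look roughly like this; the paths are my best guess at the current layout of the `examples/` folder and the notebooks repo, so please double-check them before opening the PR:

```md
<PipelineTag pipeline="text-generation"/>

- A [causal language modeling](https://huggingface.co/docs/transformers/tasks/language_modeling) task guide.
- Example scripts for causal language modeling in [PyTorch](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling), [TensorFlow](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/language-modeling), and [Flax](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling).
- A [notebook](https://github.com/huggingface/notebooks/blob/main/examples/language_modeling.ipynb) on how to fine-tune a model on a causal language modeling task.
```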

shogohida commented 1 year ago

@stevhliu Thanks for your comment! It will take me a while to collect resources from the scripts and notebooks because I'm not very familiar with OpenAI GPT, but I'll do my best. I'll let you know if I have another question.

ambujpawar commented 1 year ago

Hi, I would like to take CLIP from the list you have mentioned. :)

stevhliu commented 1 year ago

That's great @ambujpawar! I'm looking forward to your contribution, and feel free to ping me if you have any questions! 🤗

Saad135 commented 1 year ago

@stevhliu I would like to work on DeBERTa

stevhliu commented 1 year ago

Great, thanks for taking on DeBERTa @Saad135! 🤗

JuheonChu commented 1 year ago

Hello, do you mind if I tackle the ALBERT model? @stevhliu

stevhliu commented 1 year ago

For sure, looking forward to your contribution @JuheonChu! 🤗

stanleycai95 commented 1 year ago

Hi! Could I try ViT? It might take me some time though, as I have some work projects to complete too.

hazrulakmal commented 1 year ago

Hi, I would like to work on XLM-RoBERTa! @stevhliu

stevhliu commented 1 year ago

Hey @stanleycai95, that would be great! Feel free to work on it when you have the time :)

Awesome, XLM-RoBERTa is all yours @hazrulakmal!

adit299 commented 1 year ago

Hi, I would like to work on GPT-J! @stevhliu

stevhliu commented 1 year ago

Yay, thanks for taking on GPT-J @adit299! Let me know if you have any questions or need any help 🤗

alissadb commented 1 year ago

Hi, could I work on OPT? :) @stevhliu

stevhliu commented 1 year ago

OPT is all yours @alissadb! 🤩

Laxmaan commented 1 year ago

Let me round out the list, @stevhliu. I'll take TrOCR.

stevhliu commented 1 year ago

Awesome, thanks for finishing this off @Laxmaan! 🎉

elabongaatuo commented 1 year ago

Hello @stevhliu. I'd love to contribute to the documentation. I see all the models are assigned; is there any other one I can help with? Thank you 😊

stevhliu commented 1 year ago

Hi @elabongaatuo, sorry for the late reply and thanks for your enthusiasm!

I think we are good with the model resource contributions for now. If you're looking for ways to contribute to the docs, feel free to open an issue for improving them (content that is unclear, missing, or inaccurate, or typos that need fixing) and we can review it there. For more info about getting started with contributing, take a look at this guide! 🤗

elabongaatuo commented 1 year ago

Hello @stevhliu . Thanks for getting back to me. I'll be on the lookout for docs that need improving.

stevhliu commented 1 year ago

Hi @JuheonChu and @Laxmaan, I wanted to check and see if you're still interested in making a model contribution. Totally cool if you aren't available anymore; I'll unassign the models you claimed and let others take a shot at them. Thanks!

huangperry commented 1 year ago

Hi @stevhliu, I'd like to take a shot at one of the models if one of them becomes unassigned. Please let me know!

stevhliu commented 1 year ago

Thanks for the interest; TrOCR, LayoutLMV2, and ALBERT are now available!

elabongaatuo commented 1 year ago

Hello @stevhliu. I'd like to take up ALBERT.

huangperry commented 1 year ago

> Thanks for the interest; TrOCR, LayoutLMV2, and ALBERT are now available!

I’d like to take TrOCR!

stevhliu commented 1 year ago

All yours! Happy contributing and feel free to let me know if you have any questions! 🤗

subham73 commented 1 year ago

> Thanks for the interest; TrOCR, LayoutLMV2, and ALBERT are now available!

Hello @stevhliu! I don't have many options left, I guess 😅. LayoutLMV2 for me then 🌏.

stevhliu commented 1 year ago

Hi @subham73, LayoutLMv2 is actually done haha!

Girish16 commented 1 year ago

Hi @stevhliu, are there any open issues to work on? :)

stevhliu commented 1 year ago

Hi, thanks for your interest @Girish16!

Feel free to browse Good First Issues for open issues to work on, and you can also check out the Contribution guide for more ways to contribute! 🤗

ENate commented 1 year ago

Hi. Is ALBERT still available?

stevhliu commented 1 year ago

Hi @ENate, ALBERT is currently being worked on in #23685. If the original contributor is no longer interested in working on it, I'll let you know! 😄

ENate commented 1 year ago

No worries, thanks :)

elabongaatuo commented 1 year ago

@stevhliu hello, @ENate can take it up. 😊

ENate commented 1 year ago

Okay then. Will proceed using the guidelines provided by @stevhliu and the example for DistilBERT.

ENate commented 1 year ago

@stevhliu - I saw that there is a resource for ALBERT at:

https://huggingface.co/docs/transformers/main/en/model_doc/albert

which is similar to the resources for DistilBERT you mentioned in the guidelines above at:

https://huggingface.co/docs/transformers/main/en/model_doc/distilbert#resources

stevhliu commented 1 year ago

Yeah, ALBERT only has the task guides, and it doesn't go quite as in-depth as DistilBERT. For example, DistilBERT includes links to the course, notebooks, and scripts. You can probably just copy over most of the content from DistilBERT that is relevant to ALBERT (in other words, replace DistilBertForX with AlbertForX)!
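
As a rough sketch of what that substitution looks like for one entry (the script and notebook links here are the general text-classification examples, so treat them as an assumption and verify they make sense for ALBERT):

```md
<PipelineTag pipeline="text-classification"/>

- [`AlbertForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification) and [notebook](https://github.com/huggingface/notebooks/blob/main/examples/text_classification.ipynb).
```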

ENate commented 1 year ago

Thanks :) @stevhliu

daniela-basurto commented 1 year ago

Hello @stevhliu is Jukebox still available?

stevhliu commented 1 year ago

Feel free to open a PR for Jukebox @daniela-basurto! 🤗

wonhyeongseo commented 1 year ago

Hello @stevhliu, may I please take up Whisper with a few of the OSSCA mentees?

Cc: tysm @ArthurZucker for the pointer! We'll start compiling models with incomplete resource tabs so our mentees can work on them.

stevhliu commented 1 year ago

Yes absolutely, thanks for your interest @wonhyeongseo!

wonhyeongseo commented 1 year ago

I ran a simple `grep -wL * -e "## Resources"` command, and a total of 150 out of 222 documents would benefit from this issue. I'm not sure if all of these are open for contributions though. Below is the todo list with contributors I saw recently.

Model Docs in need of resources [Updated 23-07-22]:
- [ ] albert
- [ ] altclip
- [ ] bark
- [ ] barthez
- [ ] bartpho
- [ ] bert-generation
- [ ] bert-japanese
- [ ] bertweet
- [ ] big_bird
- [ ] bigbird_pegasus
- [ ] biogpt
- [ ] blenderbot-small
- [ ] blenderbot
- [ ] bridgetower
- [ ] byt5
- [ ] camembert
- [ ] canine
- [ ] chinese_clip
- [ ] clap
- [ ] codegen
- [ ] conditional_detr
- [ ] convbert
- [ ] cpm
- [ ] cpmant
- [ ] ctrl
- [ ] deberta-v2
- [ ] decision_transformer
- [ ] deplot
- [ ] dialogpt
- [ ] dinov2
- [ ] donut
- [ ] dpr
- [ ] efficientformer
- [ ] efficientnet
- [ ] electra
- [ ] encodec
- [ ] encoder-decoder
- [ ] ernie
- [ ] ernie_m
- [ ] esm
- [ ] flan-t5
- [ ] flan-ul2
- [ ] flaubert
- [ ] flava
- [ ] fnet
- [ ] focalnet
- [ ] fsmt
- [ ] funnel
- [ ] gpt-sw3
- [ ] gpt_bigcode
- [ ] gpt_neo
- [ ] gpt_neox
- [ ] gpt_neox_japanese
- [ ] gptsan-japanese
- [ ] graphormer
- [ ] herbert
- [ ] hubert
- [ ] ibert
- [ ] instructblip
- [ ] jukebox @daniela-basurto
- [ ] layoutxlm
- [ ] led
- [ ] llama @wonhyeongseo and OSSCA
- [ ] llama2
- [ ] longformer
- [ ] longt5
- [ ] luke
- [ ] lxmert
- [ ] m2m_100
- [ ] marian
- [ ] markuplm
- [ ] matcha
- [ ] mbart
- [ ] mega
- [ ] megatron-bert
- [ ] megatron_gpt2
- [ ] mgp-str
- [ ] mluke
- [ ] mms
- [ ] mobilebert
- [ ] mobilevitv2
- [ ] mpnet
- [ ] mra
- [ ] mt5
- [ ] musicgen
- [ ] mvp
- [ ] nezha
- [ ] nllb-moe
- [ ] nllb
- [ ] nystromformer
- [ ] owlvit
- [ ] pegasus
- [ ] pegasus_x
- [ ] perceiver
- [ ] phobert
- [ ] plbart
- [ ] prophetnet
- [ ] qdqbert
- [ ] rag
- [ ] realm
- [ ] reformer
- [ ] rembert
- [ ] roberta-prelayernorm
- [ ] roc_bert
- [ ] roformer
- [ ] rwkv
- [ ] sam
- [ ] sew-d
- [ ] sew
- [ ] speech-encoder-decoder
- [ ] speech_to_text
- [ ] speech_to_text_2
- [ ] speecht5
- [ ] splinter
- [ ] squeezebert
- [ ] swiftformer
- [ ] t5v1.1
- [ ] tapas
- [ ] timesformer
- [ ] todo
- [ ] tvlt
- [ ] ul2
- [ ] umt5
- [ ] unispeech-sat
- [ ] unispeech
- [ ] vilt
- [ ] vision-encoder-decoder
- [ ] vision-text-dual-encoder
- [ ] visual_bert
- [ ] vivit
- [ ] wav2vec2-conformer
- [ ] wav2vec2_phoneme
- [ ] wavlm
- [ ] whisper @wonhyeongseo and OSSCA
- [ ] xglm
- [ ] xlm-prophetnet
- [ ] xlm-roberta-xl
- [ ] xlm-v
- [ ] xlm
- [ ] xlnet
- [ ] xls_r
- [ ] xlsr_wav2vec2
- [ ] yoso

ahtashamilyas commented 1 year ago

Ok.

stevhliu commented 1 year ago

> I'm not sure if all of these are open for contributions though.

Thanks for checking @wonhyeongseo! I think it would be nice to eventually have Resources for all the models, so if you see other ones you're interested in contributing to, feel free to open a PR! I would focus on the more high-impact models first (like LLaMA) that get more pageviews/usage. For certain models (like BORT) that are in maintenance mode, we can skip those entirely.

wonhyeongseo commented 1 year ago

Awesome @stevhliu, thank you so much for your warm reception.

Thank you for the heads-up about files in maintenance mode! I've deleted 8 of those from the list above (https://github.com/huggingface/transformers/issues/20055#issuecomment-1645188309) using `grep -LZ "## Resources" * | xargs -0 grep -l ""`:
```
auto.md
bort.md
mctct.md
open-llama.md
retribert.md
tapex.md
trajectory_transformer.md
transfo-xl.md
```

Thank you so much for your support @stevhliu . Hope you have a wonderful weekend!

Best regards, Won Seo

stevhliu commented 1 year ago

> May we please reserve LLaMA as well for the OSSCA team?

For sure! 👍

> In your opinion, when is the ideal time to start gathering resources after a model's release?

I think maybe whenever you see some content, you can open a PR to add it to the model page. It's ok if it's just one guide/tutorial/blog post; we can gradually add to it as more content and resources get created. For example, Philipp has a blog post about fine-tuning LLaMA 2 on SageMaker here that can be added :)

> Although I think this is already the case, would it be possible for you to sort these incomplete models and provide the top 20 sorted by impact or page views as of recent advances?

By downloads, here are the next top 20 models (it's okay to skip some of the models if there aren't any available resources for them):

BART, CLIPSeg, Marian, MPNet, ELECTRA, ResNet, CamemBERT, HuBERT, LLaMA, Longformer, VisionEncoderDecoder, GPT NeoX, EnCodec, ConvBERT, mBART, GPT Neo, FNet, YOLOS, BLIP, BEiT