huggingface / huggingface-llama-recipes


Call for contributions #43

Open ariG23498 opened 1 month ago

ariG23498 commented 1 month ago

πŸŽ‰ Open Call for Contributions to the LLaMA Recipes Repository

Hey there! πŸ‘‹

We are excited to open up our repository for open-source contributions and can't wait to see what recipes you come up with! πŸ§‘β€πŸ³ This is a collaborative space where we develop and share scripts, notebooks, and resources for working with Llama models using the Hugging Face ecosystem.

πŸš€ Projects We'd Love Your Help With

Below is a list of projects we're eager to get started on. Each project is an opportunity to contribute and make a meaningful impact:

✨ Have your own idea? Feel free to propose new projects! Open an issue to suggest your idea, and we'll be happy to discuss and potentially add it to the list.

πŸ“ How to Contribute

  1. Open a New Issue

    • If you're interested in any of the projects above or have a new idea, please open a new issue with a title that reflects the project.
    • Use the issue to discuss your approach and gather feedback.
  2. Let Us Know

    • Comment on this issue to let us know you've opened a new issue. We'll update the project list with a link to your issue.
  3. Start Coding

  4. Submit a Pull Request

    • Once you're ready, open a Pull Request (PR) linking to the issue you created.
    • Make sure to tag the issue in your PR description for easy reference.
  5. Update the README

    • Don't forget to update the README.md to include your example or project, so others can easily find and use it.

πŸ“š For New Contributors

If you're beginning your open-source journey, we recommend reading our Contribution Guide first. It contains valuable information to help you get started.

Looking forward to what we build together!

Zhreyu commented 1 month ago

Hey @ariG23498, thanks for the open call! I've opened an issue to work on Gradio demos for Llama models. You can check it out here
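
For readers skimming the thread, a rough sketch of what such a Gradio demo could look like, using `gr.ChatInterface` on top of a `transformers` pipeline (the model name and generation settings are placeholders, not taken from the linked issue):

```python
# Assumed minimal sketch of a Gradio chat demo for a Llama model.
import gradio as gr
from transformers import pipeline

chat = pipeline("text-generation", model="meta-llama/Llama-3.2-1B-Instruct")

def respond(message, history):
    # History is ignored for simplicity; a real demo would build a chat template from it.
    out = chat(message, max_new_tokens=128, return_full_text=False)
    return out[0]["generated_text"]

gr.ChatInterface(respond).launch()
```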

sergiopaniego commented 1 month ago

Hi @ariG23498 and the rest of the community! I'd like to contribute with a notebook covering:

I'll be creating the issue so we can track and comment on it πŸ˜„

sergiopaniego commented 1 month ago

> Hi @ariG23498 and the rest of the community! I'd like to contribute with a notebook covering:
>
> I'll be creating the issue so we can track and comment on it πŸ˜„

I've opened an issue to track the development #45 πŸ€—

AnirudhJM24 commented 1 month ago

Hi @ariG23498, thanks for the open call! I've opened a new issue, #46.

This will be helpful for students who want to set up an inference server at minimal to no cost using familiar tools.
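
For context, one possible shape of such a low-cost server, assuming plain `transformers` plus FastAPI as the "familiar tools" (this is a sketch of the general idea only, not the contents of issue #46):

```python
# Assumed minimal inference server: FastAPI in front of a transformers pipeline.
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()
generator = pipeline("text-generation", model="meta-llama/Llama-3.2-1B-Instruct")

class GenerateRequest(BaseModel):
    prompt: str
    max_new_tokens: int = 64

@app.post("/generate")
def generate(req: GenerateRequest):
    out = generator(req.prompt, max_new_tokens=req.max_new_tokens, return_full_text=False)
    return {"text": out[0]["generated_text"]}

# Run with: uvicorn server:app --port 8000
```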

Solobrad commented 1 month ago

I've opened a new issue related to this open call. It’s about some thoughts and ideas regarding RAG and LLaMA.

farukalamai commented 1 month ago

Hey @ariG23498, thanks for the open call! I have opened an issue; check #48.

Sakalya100 commented 1 month ago

Hi @ariG23498

Thanks for the open call! I have opened an issue. Check #50

Purity-E commented 1 month ago

Hi @ariG23498, thanks for the open call. I'm eager to contribute to the Implementing RAG with Llama project. I've opened this issue here

Sakalya100 commented 1 month ago

Hi @ariG23498, I have opened an issue related to the open call. Please check #54. Hope this is relevant and within scope.

WazupSteve commented 1 month ago

Hey @ariG23498, thanks for the open call! I'm adding a comment here to ask if this can be part of the cookbook: a recipe that explores fine-tuning Llama using reinforcement learning. This involves setting up reward functions and creating pipelines for RL-based fine-tuning. A different Llama model can be used as a teacher model to guide training for smaller models, or an environment can be set up for the model to produce o1-style CoT reasoning. The issue is raised in #56.
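
To make the reward-function part of the proposal a little more concrete, here is a rough sketch using TRL's PPOTrainer (this assumes the older, pre-0.12 TRL API; the model name and the toy length-based reward are placeholders, not part of issue #56):

```python
# Assumed sketch of RL-based fine-tuning with TRL's PPOTrainer (TRL < 0.12 API).
# The reward here is a toy heuristic; a real recipe would score responses with a
# reward model or a teacher Llama model.
import torch
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer

model_name = "meta-llama/Llama-3.2-1B-Instruct"
config = PPOConfig(model_name=model_name, batch_size=1, mini_batch_size=1)

model = AutoModelForCausalLMWithValueHead.from_pretrained(model_name)
ref_model = AutoModelForCausalLMWithValueHead.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

ppo_trainer = PPOTrainer(config, model, ref_model, tokenizer)

query = tokenizer("Explain gravity in one sentence:", return_tensors="pt").input_ids[0]
response = ppo_trainer.generate(query, max_new_tokens=32, return_prompt=False)[0]

reward = torch.tensor(min(len(response), 32) / 32.0)  # toy reward: prefer longer answers
stats = ppo_trainer.step([query], [response], [reward])  # one PPO optimization step
```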

emre570 commented 1 month ago

Hello, I can build a simple RAG app using Llama 3. I opened issue #61. Thanks!

atharv-jiwane commented 1 month ago

Hey @ariG23498, I am eager to contribute for the first time and want to build a simple RAG pipeline. I have opened issue #62. Thanks so much for the opportunity!
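
For anyone following along, a very small sketch of the retrieve-then-generate idea behind these RAG proposals (the library choices, toy documents, and model name are assumptions for illustration, not the content of issue #62):

```python
# Assumed minimal RAG sketch: embed documents, retrieve the closest one for a
# question, and condition a Llama model on it. Real recipes may use FAISS,
# LangChain, or other tooling instead.
from sentence_transformers import SentenceTransformer, util
from transformers import pipeline

docs = [
    "Llama 3 is a family of open-weight models released by Meta.",
    "Hugging Face Transformers supports Llama models out of the box.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_emb = embedder.encode(docs, convert_to_tensor=True)

question = "Which library supports Llama models?"
q_emb = embedder.encode(question, convert_to_tensor=True)
best = util.cos_sim(q_emb, doc_emb).argmax().item()  # index of the most similar document

generator = pipeline("text-generation", model="meta-llama/Llama-3.2-1B-Instruct")
prompt = f"Context: {docs[best]}\nQuestion: {question}\nAnswer:"
print(generator(prompt, max_new_tokens=50, return_full_text=False)[0]["generated_text"])
```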

MayankChaturvedi commented 1 month ago

Hi @ariG23498, I'm researching ways to make a notebook for #64. Would love to know your opinion on whether this is relevant and within the scope of what we want to build : )

silvererudite commented 1 month ago

Hi @ariG23498, I opened two issues, #59 and #55. Please have a look and let me know your thoughts. Thanks!

pardeep-singh commented 1 month ago

@ariG23498 Can we add fine-tuning notebooks like the following ones:

Also happy to get feedback on how this can be made more useful for the recipes use case.

ariG23498 commented 1 month ago

Hey @pardeep-singh πŸ‘‹

I would love to see more fine-tuning recipes, but both of the Kaggle notebooks seem very similar to the existing fine-tuning with PEFT recipe. Would you first like to look at the fine-tuning scripts already mentioned in the repo, and then take a stab at a different proposal?

We are looking for simple, small notebooks that can help a beginner get started with the Llama family of models. While an E2E solution looks really good, that is not what we are in search of at this point.
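
For reference, that PEFT approach roughly boils down to wrapping the base model in a LoRA adapter; a minimal sketch is below (the checkpoint name and hyperparameters are illustrative, not the repo's exact values):

```python
# Minimal LoRA/PEFT sketch: only small adapter weights are trained on top of a
# frozen Llama model. Values below are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "meta-llama/Llama-3.2-1B-Instruct"  # any Llama checkpoint you can access
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # attention projections in Llama blocks
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights are trainable
# From here, train with transformers.Trainer or trl's SFTTrainer on your dataset.
```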

lulu3202 commented 1 month ago

@ariG23498, I just opened a new issue: https://github.com/huggingface/huggingface-llama-recipes/issues/66. It's about building a Gradio demo. Thanks!

AhmedIssa11 commented 1 month ago

Hello @ariG23498, what do you think about creating a guide on how to fine-tune Llama for domain-specific tasks?

Haleshot commented 1 month ago

> πŸŽ‰ Open Call for Contributions to the LLaMA Recipes Repository
>
> Looking forward to what we build together!

@ariG23498 I've opened a new issue for implementing ORPO fine-tuning for Llama 3 using Marimo notebooks. Additionally, I noticed the repository doesn't have issue/PR templates - would you like help setting those up to streamline future contributions?

Related: #82