-
# Description
Support for PEFT in chatllama models and training
# TODO
- [x] Add PEFT to enable parameter-efficient fine-tuning in the actor, reward, and critic models (a minimal sketch follows this list).
- [ ] Check RLHF stabil…
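A minimal sketch of what the actor-side change could look like, using the Hugging Face `peft` library; the checkpoint path and LoRA hyperparameters below are illustrative, not taken from this PR:

```python
# Wrap a causal-LM actor with a LoRA adapter so only the adapter weights train.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

base = AutoModelForCausalLM.from_pretrained("path/to/llama-checkpoint")  # hypothetical path
lora_cfg = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                   # rank of the LoRA update matrices
    lora_alpha=16,                         # scaling factor applied to the update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections in LLaMA-style blocks
)
actor = get_peft_model(base, lora_cfg)
actor.print_trainable_parameters()         # only the adapter parameters require grad
```

The same wrapping would apply to the reward and critic models: the frozen base weights stay shared while each role keeps its own small adapter.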
-
To run LLaMA 3.1 (or similar large language models) locally, your machine needs to meet specific hardware requirements, especially for storage and other resources. Here's a breakdown of what you typically need:
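As a rough illustration of why storage and memory dominate these requirements, here is a back-of-envelope estimate of the space taken by the weights alone (8B/70B/405B are the published Llama 3.1 parameter counts; KV-cache and activation overheads are workload-dependent and excluded):

```python
# Back-of-envelope size of model weights at common precisions.
def weight_gib(params_billion: float, bytes_per_param: float) -> float:
    return params_billion * 1e9 * bytes_per_param / 2**30

for params in (8, 70, 405):
    for name, bpp in (("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)):
        print(f"Llama 3.1 {params}B @ {name}: ~{weight_gib(params, bpp):.0f} GiB")
```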
### …
-
### 🚀 The feature, motivation and pitch
Fuyou Training Framework Integration for PyTorch
Description:
Integrate the Fuyou training framework into PyTorch to enable efficient fine-tuning of larg…
-
### Describe the feature
I want to continue pre-training Llama 2 70B using my own data, which is about 1B tokens. I have read [Fine-tuning Llama 2 70B using PyTorch FSDP](https://huggingface.co/bl…
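For reference, a minimal sketch of the FSDP wrapping that blog post describes, assuming a `torchrun` launch with one process per GPU; the checkpoint name and optimizer settings are illustrative:

```python
# Shard a Llama model across GPUs with PyTorch FSDP for continued pre-training.
import functools
import os
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp.wrap import transformer_auto_wrap_policy
from transformers import AutoModelForCausalLM
from transformers.models.llama.modeling_llama import LlamaDecoderLayer

dist.init_process_group("nccl")
torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-70b-hf", torch_dtype=torch.bfloat16
)
# Wrap each decoder layer as its own FSDP unit so parameters are sharded per layer.
wrap_policy = functools.partial(
    transformer_auto_wrap_policy, transformer_layer_cls={LlamaDecoderLayer}
)
model = FSDP(model, auto_wrap_policy=wrap_policy, device_id=torch.cuda.current_device())
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
```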
-
## Problem statement
1. Despite the impressive capabilities of large-scale language models, their potential in modalities other than text has not been fully demonstrated.
2. Aligning parameters of vi…
-
Add support for PEFT models
## Description
Currently, only models that are instances of `PreTrainedModel` are supported. It would be useful to add support for models using Parameter-Effi…
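One possible shape for that support, sketched under the assumption that the `peft` library's `PeftModel` wrapper is what needs handling (`resolve_base_model` is a hypothetical helper, not an existing API):

```python
# Accept PEFT-wrapped models alongside plain PreTrainedModel instances.
from peft import PeftModel
from transformers import PreTrainedModel

def resolve_base_model(model):
    """Return the underlying PreTrainedModel, unwrapping a PEFT adapter if present."""
    if isinstance(model, PeftModel):
        return model.get_base_model()  # strips the adapter wrapper
    if isinstance(model, PreTrainedModel):
        return model
    raise TypeError(f"Unsupported model type: {type(model).__name__}")
```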
-
## Overview
- Memory- and parameter-efficient fine-tuning using LLM.int8() + LoRA
- Model training planned with BitsAndBytes + PEFT
- Use [polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) as the backbone (KoGPT…
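A sketch of the planned setup, assuming recent `transformers` and `peft` releases; the LoRA hyperparameters are illustrative:

```python
# Load polyglot-ko-5.8b in 8-bit via bitsandbytes, then attach a LoRA adapter.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/polyglot-ko-5.8b",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # LLM.int8() quantization
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)  # cast norms/head for stable k-bit training
lora_cfg = LoraConfig(r=8, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()
```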
-
### Description
We have received exclusive data for speech-to-text (STT) from a specific speaker. The task is to fine-tune the model using both this speaker's training data and the base training data…
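One way to combine the two corpora is weighted interleaving with the Hugging Face `datasets` library; the dataset paths and the 30/70 mixing ratio below are illustrative assumptions, not taken from the task description:

```python
# Mix speaker-specific STT data with the base corpus by sampling probability.
from datasets import load_dataset, interleave_datasets

speaker_ds = load_dataset("audiofolder", data_dir="data/speaker_x", split="train")  # hypothetical path
base_ds = load_dataset("path/to/base_stt_corpus", split="train")                    # hypothetical corpus

mixed = interleave_datasets(
    [speaker_ds, base_ds],
    probabilities=[0.3, 0.7],            # oversample the target speaker
    seed=42,
    stopping_strategy="all_exhausted",   # keep drawing until every source is used up
)
```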
-
Hi @baifanxxx,
I'm encountering an issue where the forward pass of the `SegVol` class hangs when the `image` is passed to `image_encoder`, resulting in NCCL communication timeouts during fine-tuning with…
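Not part of the original report, but a common first step when a collective hangs like this is to raise the NCCL timeout and turn on distributed debug logging so the stuck rank shows up in the logs; a sketch:

```python
# Make the hang diagnosable: verbose collective logging plus a longer timeout.
import os
from datetime import timedelta
import torch.distributed as dist

os.environ.setdefault("TORCH_DISTRIBUTED_DEBUG", "DETAIL")  # report mismatched collectives
os.environ.setdefault("NCCL_DEBUG", "INFO")                 # NCCL-level logging

dist.init_process_group("nccl", timeout=timedelta(minutes=60))
```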
-
I have made the following changes:
1. https://github.com/WZH0120/SAM2-UNet/blob/eb1c38d870358cbdd769c9721062f7bb888ef9b5/train.py#L15
2. edit the yaml https://github.com/WZH0120/SAM2-UNet/blob/eb1c3…