-
> [rank0]: Traceback (most recent call last):
> [rank0]: File "/opt/tmp/nlp/wzh/LLM-Dojo/rlhf/rloo_train.py", line 167, in
> [rank0]: trainer.train()
> [rank0]: File "/home/nlp/miniconda3/…
-
### 🐛 Bug
Today, when attempting to upload a LoRA-trained Llama 3.1 70B model (the first time I've trained Llama 3.1), I hit the following error during the LoRA merge. Note that I used the `cpu_shard` method to u…
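For context, the save-and-upload step I'm aiming for looks roughly like the sketch below (directory names, shard size, and the repo id are placeholders, not the actual paths from the failing run):

```python
# Sketch: re-shard an already-merged 70B checkpoint on CPU and upload it.
# All paths and the repo id below are hypothetical placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from huggingface_hub import HfApi

src = "./llama-3.1-70b-merged"     # merged checkpoint produced earlier
dst = "./llama-3.1-70b-sharded"    # output directory with smaller shards

model = AutoModelForCausalLM.from_pretrained(
    src,
    torch_dtype=torch.bfloat16,
    device_map={"": "cpu"},        # keep everything in CPU RAM
    low_cpu_mem_usage=True,
)
tokenizer = AutoTokenizer.from_pretrained(src)

# Write sharded safetensors so no single file is enormous.
model.save_pretrained(dst, max_shard_size="5GB", safe_serialization=True)
tokenizer.save_pretrained(dst)

# Push the whole folder to the Hub.
HfApi().upload_folder(folder_path=dst, repo_id="my-org/llama-3.1-70b-lora-merged")
```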
-
-
When starting `finetune_qlora.sh` with `transformers==4.34.0`, it crashes with the following error:
> TypeError: forward() got an unexpected keyword argument 'padding_mask'
```
bitsandbytes …
```
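This class of error typically comes from newer transformers versions passing an extra `padding_mask` keyword into attention modules that a repo's monkey patch doesn't accept. The usual fix is to match the transformers version the script was written against (for example, pin it below 4.34), but a shim like the sketch below can also make a patched forward tolerate the extra keyword (which module to wrap is repo-specific and left as a placeholder):

```python
# Sketch: wrap a module's forward so unexpected keyword arguments such as
# `padding_mask` are silently dropped instead of raising TypeError.
import inspect
from functools import wraps

def tolerate_extra_kwargs(forward):
    params = inspect.signature(forward).parameters
    # Nothing to filter if the function already accepts **kwargs.
    if any(p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()):
        return forward
    accepted = set(params)

    @wraps(forward)
    def wrapper(*args, **kwargs):
        kept = {k: v for k, v in kwargs.items() if k in accepted}
        return forward(*args, **kept)

    return wrapper

# Hypothetical usage on the attention modules of a loaded Llama-style model:
# for layer in model.model.layers:
#     layer.self_attn.forward = tolerate_extra_kwargs(layer.self_attn.forward)
```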
-
The following is a snippet taken from https://github.com/lvwerra/trl/blob/fc468e0f3582de1aacd071fceb24265c619a8ef5/examples/stack_llama/scripts/merge_peft_adapter.py:
```
# Load the Lora model
model = PeftModel.from_pretrained(mo…
```
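For reference, the pattern that script follows is roughly the sketch below (a hedged reconstruction rather than the script verbatim; the base model, adapter path, and output directory are placeholders):

```python
# Sketch of the merge_peft_adapter.py-style merge: load the base model,
# attach the LoRA adapter, fold it into the weights, and save the result.
# Model names and paths are placeholders.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_name = "meta-llama/Llama-2-7b-hf"   # hypothetical base model
adapter_path = "./my-lora-adapter"       # hypothetical adapter checkpoint

base = AutoModelForCausalLM.from_pretrained(base_name, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(base_name)

# Load the Lora model on top of the base weights, then merge the adapter in.
model = PeftModel.from_pretrained(base, adapter_path)
model = model.merge_and_unload()

model.save_pretrained("./merged-model")
tokenizer.save_pretrained("./merged-model")
```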
-
WSL2 Ubuntu, fresh install: I get the following error after it downloads the weights and tries to train.
Sorry I can't give more details; I'm really not sure what's going on.
Number of samples…
-
Assuming a Lambda Labs 8×A100 80 GB instance, which is about 12 bucks an hour, you can get a reasonable dollar estimate that way.
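As a back-of-the-envelope sketch of that estimate (the hourly rate and the run length below are assumptions for illustration, not measured numbers):

```python
# Back-of-the-envelope training cost: hourly instance price x wall-clock hours.
hourly_rate_usd = 12.0    # assumed 8xA100 80GB on-demand rate
training_hours = 20.0     # hypothetical wall-clock time for the run

estimated_cost = hourly_rate_usd * training_hours
print(f"~${estimated_cost:,.0f} for {training_hours:.0f} h at ${hourly_rate_usd:.0f}/h")
# -> ~$240 for 20 h at $12/h
```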
-
### Description
When trying to run an Unsloth fine-tuning script, I encounter a Triton compilation error related to `ReduceOpToLLVM.cpp`.
### Error Message
```
python /data/ephemeral/home/unsloth_ex…
```
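Failures inside Triton's `ReduceOpToLLVM.cpp` often come down to a mismatch between the installed torch/triton builds (or the GPU's compute capability) and what Unsloth's kernels expect, so the exact version stack is the first thing worth attaching. A small sketch to collect it (package names only; no Unsloth internals assumed):

```python
# Collect the version and hardware info usually needed to debug
# Triton compilation failures.
from importlib.metadata import PackageNotFoundError, version

import torch

for pkg in ("torch", "triton", "unsloth", "transformers", "bitsandbytes"):
    try:
        print(f"{pkg}=={version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg}: not installed")

print("CUDA runtime:", torch.version.cuda)
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0),
          "| compute capability:", torch.cuda.get_device_capability(0))
```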
-
Is there a way to finetune using V2? Thanks.
-
Same here. I had finished pretraining LlaMA-3.1-7B-Instruct and then continued fine-tuning with QLoRA normally. After 2 epochs, I switched to Unsloth to continue the fine-tuning with a longer context (80…
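For what it's worth, the switch-over follows roughly the standard Unsloth loading pattern sketched below (the checkpoint path, sequence length, and LoRA settings are placeholders, and exact argument names may differ between Unsloth versions):

```python
# Sketch: continue QLoRA fine-tuning with Unsloth at a longer sequence length.
# The checkpoint path, max_seq_length, and LoRA hyperparameters are placeholders.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="./my-continued-pretrain-checkpoint",  # hypothetical earlier checkpoint
    max_seq_length=8192,                              # assumed longer context target
    load_in_4bit=True,
)

# Add fresh LoRA adapters (skip this if the checkpoint already contains them).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    use_gradient_checkpointing="unsloth",
)
```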