-
**Describe the bug**
Fine-tuning llama-3.1-8b-instruct on 4x A100 GPUs (also tried llama2-13b-ms, same error) via the CLI; a GPU-visibility sanity check follows the command:
```
CUDA_VISIBLE_DEVICES=0,1,2,3 \
NPROC_PER_NODE=4 \
LOCAL_WORLD_SIZE=4 \
swift…
```
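Before digging into the swift error itself, it can help to confirm the launch environment actually exposes all four GPUs. A minimal sanity-check sketch, assuming only a standard PyTorch install (the script itself is illustrative, not part of the original report):

```python
import os
import torch

# With CUDA_VISIBLE_DEVICES=0,1,2,3 exported, PyTorch should report 4 devices.
print("CUDA_VISIBLE_DEVICES:", os.environ.get("CUDA_VISIBLE_DEVICES"))
print("visible GPU count:", torch.cuda.device_count())
for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    # Total memory per device, in GiB -- useful when sizing an 8B/13B fine-tune.
    print(f"cuda:{i} {props.name} {props.total_memory / 2**30:.1f} GiB")
```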
-
How can I fine-tune on Kaggle or Colab?
-
I'm trying to fine-tune Phi 3.5 Vision using transformers; however, I'm running into an issue when saving the model during or after training. See below for a minimal reproducible example (a sketch of the standard save calls follows it).
My examp…
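For reference, a minimal sketch of the save calls such a reproduction typically exercises, assuming a standard transformers setup (the model id, output path, and trust_remote_code usage are assumptions about this setup, not taken from the report):

```python
from transformers import AutoModelForCausalLM, AutoProcessor

# Assumed identifiers: the Phi 3.5 Vision checkpoint id and a local output path.
model_id = "microsoft/Phi-3.5-vision-instruct"
output_dir = "./phi35-vision-finetuned"

model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

# ... fine-tuning would happen here ...

# The standard save path: model weights + config, plus the processor files,
# so the checkpoint can later be reloaded with from_pretrained().
model.save_pretrained(output_dir)
processor.save_pretrained(output_dir)
```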
-
```
hayden@XPS15:/mnt/d/phi3-finetuning/phi3-finetuning/terminal-assistant/setup$ conda activate phi-3-env
(phi-3-env) hayden@XPS15:/mnt/d/phi3-finetuning/phi3-finetuning/terminal-assistant/setup$ c…
```
-
During the fine-tuning process, I noticed that the old version of the v2 model can produce grid-like artifacts, whereas the updated v2 version does not. Could you please explain…
-
Since it appears that Flux LoRA training can still be effective when only specific layers are trained, I am wondering if this functionality could be extended to full fine-tuning, since this is where the biggest …
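To make "training only specific layers" concrete in a full fine-tune (as opposed to LoRA): a generic PyTorch sketch, where the module-name substrings are purely illustrative and not tied to the Flux codebase:

```python
import torch.nn as nn

def freeze_except(model: nn.Module, trainable_substrings: list[str]) -> None:
    """Freeze all parameters except those whose name contains one of the substrings."""
    for name, param in model.named_parameters():
        param.requires_grad = any(s in name for s in trainable_substrings)

# Illustrative usage: fully fine-tune only the last two blocks and the output head.
# freeze_except(model, ["blocks.22", "blocks.23", "final_layer"])
```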
-
Thank you very much for your outstanding work!
I have a question about something I haven't quite understood. When fine-tuning your RS5M model on RSICD or RSITMD using the methods outlined in the paper (InfoNCE,…
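For context, a minimal sketch of the symmetric InfoNCE objective being referenced, in generic PyTorch (the temperature value and tensor shapes are illustrative, not taken from the paper):

```python
import torch
import torch.nn.functional as F

def info_nce(image_emb: torch.Tensor, text_emb: torch.Tensor,
             temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE over a batch of paired image/text embeddings of shape [B, D]."""
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature  # [B, B] similarity matrix
    targets = torch.arange(logits.size(0), device=logits.device)
    # Matching pairs lie on the diagonal; average image->text and text->image losses.
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2
```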
-
Fine-tuning MACE potentials yields very different results in MACE 0.3.7 vs 0.3.6; I'm not yet sure why.
-
Great work! This is really impressive. Is there any chance you could release the training or fine-tuning (LoRA) code? It would be much appreciated, thanks!
-
Following the fine-tuning commands, I tried setting max_image_size to 512 but still hit OOM on 40 GB. How much VRAM is needed to fine-tune, and are there any parameters that can reduce VRAM usage? Thank you. (A generic memory-reduction sketch follows the screenshot.)
![oom capture](htt…
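Not repo-specific advice, but a generic sketch of the transformers settings that commonly reduce fine-tuning VRAM (the output path and all flag values are illustrative):

```python
from transformers import TrainingArguments

# Common memory-reducing knobs when fine-tuning on a single ~40 GB card:
args = TrainingArguments(
    output_dir="./out",               # illustrative path
    per_device_train_batch_size=1,    # smallest per-step batch, then...
    gradient_accumulation_steps=16,   # ...recover the effective batch size here
    gradient_checkpointing=True,      # trade recompute for activation memory
    bf16=True,                        # half precision on A100-class hardware
    optim="adafactor",                # lighter optimizer state than AdamW
)
```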