-
First, I would like to thank ostris for this amazing tool; after trying Kohya and Simpletuner, ai-toolkit gives me better results with great ease. I would like to know if there is a plan to create a …
-
In the dreambooth community, it has been empirically shown that extracting a LoRA from a full finetune gives better results than training a LoRA directly. Enabling a full dreambooth finetune of SDXL would not onl…
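For context, "extracting a LoRA" here means approximating the weight delta between the finetuned and base checkpoints with a low-rank factorization. A minimal sketch of that idea in plain PyTorch (the rank, layer iteration, and output format are illustrative assumptions, not any particular tool's implementation):

```python
import torch

def extract_lora(base_weight: torch.Tensor, tuned_weight: torch.Tensor, rank: int = 32):
    """Approximate (tuned_weight - base_weight) with a rank-`rank` SVD factorization.

    Returns (lora_down, lora_up) such that lora_up @ lora_down ~= tuned_weight - base_weight.
    """
    delta = (tuned_weight - base_weight).float()
    U, S, Vh = torch.linalg.svd(delta, full_matrices=False)
    # Keep only the top-`rank` singular directions of the weight delta.
    U, S, Vh = U[:, :rank], S[:rank], Vh[:rank, :]
    lora_up = U * S.sqrt()              # (out_features, rank)
    lora_down = S.sqrt()[:, None] * Vh  # (rank, in_features)
    return lora_down, lora_up
```

In practice this would be applied per linear/attention layer of the UNet and text encoders, with the factors saved in whatever LoRA checkpoint format the downstream tooling expects.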
-
I have a machine with 4 Nvidia L40 GPUs. I am trying to use the full_finetune_distributed recipe with the llama3_1/8B_full config. My dataset configuration in the config file is given below:
dataset:
_c…
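For reference, a torchtune dataset entry points the `_component_` key at a dataset builder. A minimal Python sketch of the equivalent builder call, assuming the stock `alpaca_dataset` and a hypothetical tokenizer path (the truncated config above may differ):

```python
from torchtune.datasets import alpaca_dataset
from torchtune.models.llama3 import llama3_tokenizer

# Hypothetical path; substitute the tokenizer that ships with the Llama 3.1 8B checkpoint.
tokenizer = llama3_tokenizer("/path/to/Meta-Llama-3.1-8B/original/tokenizer.model")

# Equivalent of a YAML entry whose _component_ is torchtune.datasets.alpaca_dataset.
ds = alpaca_dataset(tokenizer=tokenizer, train_on_input=True)
print(len(ds))
```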
-
Hi! I wonder whether unsloth will support some kind of CPU offload?
For example, I would like to finetune a 7-8B model on a 24 GB GPU. Since LoRA usually results in reduced performance, it would be gr…
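For reference, outside of Unsloth the usual way to get this kind of offloading for a full finetune is parameter offload in PyTorch FSDP or DeepSpeed ZeRO. A minimal FSDP sketch, assuming a single-node `torchrun` launch and a generic Hugging Face causal LM (the model id and dtype are placeholders, and this is not Unsloth's API):

```python
import os
import torch
import torch.distributed as dist
from torch.distributed.fsdp import CPUOffload, FullyShardedDataParallel as FSDP
from transformers import AutoModelForCausalLM

dist.init_process_group("nccl")
torch.cuda.set_device(int(os.environ.get("LOCAL_RANK", 0)))

# Placeholder model id; any ~7-8B causal LM is handled the same way.
model = AutoModelForCausalLM.from_pretrained("some-7b-model", torch_dtype=torch.bfloat16)

# Shard parameters across ranks and keep them in host RAM when idle,
# trading GPU memory for extra PCIe traffic each step.
model = FSDP(model, cpu_offload=CPUOffload(offload_params=True))
```

This only illustrates the generic PyTorch mechanism; whether Unsloth will expose something equivalent is exactly what this question asks.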
-
### Please check that this issue hasn't been reported before.
- [X] I searched previous [Bug Reports](https://github.com/OpenAccess-AI-Collective/axolotl/labels/bug) and didn't find any similar reports.
…
-
PyTorch version too old for fused optimizer
```
llm-full-mp-gpus.0 [stderr] [rank0]: Traceback (most recent call last):
llm-full-mp-gpus.0 [stderr] [rank0]: File "/homes/delaunap/milabench/benc…
-
Firstly, thank you so much for developing and publishing this model and code. Much appreciated!
The downscaling inference example uses a model that was finetuned on the MERRA2 dataset to downsca…
-
As in the title. I spent a bit of time debugging it but haven't figured out the cause yet. E.g., running
```
tune run --nproc_per_node 2 full_finetune_distributed --config llama2/7B_full fsdp_cpu_…
-
## TODOs
- [ ] Fix the speaker embedding finetuning code https://github.com/lenML/ChatTTS-Forge/blob/318d33f8d0b1451a39b3cbc94debca7f4f21dfca/modules/finetune/train_speaker.py#L15-L26
- [ ] Use the …
-
Everything was normal during LoRA fine-tuning, but the following issue arose during full fine-tuning.
I use the following script for full fine-tuning:
```shell
#!/bin/bash
N…