-
I have trained LoRA several times with faces but it seems the model can't learn anything.
My dataset is as follows:
```
data.tar
|- 0000.jpg
|- 0000.txt ("a photo of a woman [ohwx]")
|- …
```
-
### Proposal to improve performance
The current execution flow with prefix caching is as follows:
1. Scheduler takes the next prefill sequence:
a. Calculate how many blocks it needs.
b. …
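Step 1a above (figuring out how many blocks a prefill sequence needs) can be sketched as a ceiling division over the block size. This is only a minimal illustration; the helper name and the `block_size` default are assumptions, not the scheduler's actual code:

```python
import math

# Hypothetical sketch: number of fixed-size KV-cache blocks needed to
# hold a prompt of prompt_len tokens, assuming block_size tokens per block.
def blocks_needed(prompt_len: int, block_size: int = 16) -> int:
    return math.ceil(prompt_len / block_size)

# A 33-token prompt with 16-token blocks needs 3 blocks
# (two full blocks plus one partially filled block).
assert blocks_needed(33) == 3
```

With prefix caching, some of these blocks may already be resident from an earlier sequence sharing the same prefix, so the scheduler would only need to allocate the remainder.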
-
### Describe the bug
When `train_dreambooth_lora_flux` attempts to generate images during validation, `RuntimeError: Input type (float) and bias type (c10::BFloat16) should be the same` is thrown.
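The error above is the generic mismatch PyTorch raises when a module held in bfloat16 receives a float32 input. A minimal standalone reproduction (the layer and shapes here are hypothetical stand-ins, not the training script's actual modules):

```python
import torch

# A module cast to bfloat16, as a stand-in for the validation pipeline's weights.
conv = torch.nn.Conv2d(3, 8, 3).to(torch.bfloat16)

# Inputs created without an explicit dtype default to float32.
x = torch.randn(1, 3, 32, 32)

# Calling conv(x) directly would raise:
# RuntimeError: Input type (float) and bias type (c10::BFloat16) should be the same
# The usual workaround is to cast the input to the module's dtype first:
y = conv(x.to(torch.bfloat16))
```

So the likely cause is that the validation images/latents are produced in float32 while the model weights are bfloat16, and one side needs an explicit cast.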
### …
-
### ⚠️ Please check that this feature request hasn't been suggested before.
- [X] I searched previous [Ideas in Discussions](https://github.com/OpenAccess-AI-Collective/axolotl/discussions/categories…
-
Can this project run on non-Apple chips? My environment is an NVIDIA A800, and I saw that the mlx library used by the project is designed for Apple chips and systems.
I can get the moe model file…
-
Code:
```
result_dora = (mag_norm_scale - 1) * F.linear(x, transpose(weight, self.fan_in_fan_out)) + mag_norm_scale * lora_B(lora_A(x)) * scaling
```
Question: what is the effect of (mag_norm_scale - 1) …
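The algebra behind the `(mag_norm_scale - 1)` factor can be seen with scalar stand-ins: since the base path `F.linear(x, W)` is typically already added to the output elsewhere, `result_dora` only needs to contribute the *correction* so that the total equals `mag_norm_scale * (base + lora)`. A minimal sketch (scalar values are hypothetical placeholders for the tensors in the snippet):

```python
# Hypothetical scalar stand-ins for the tensors in the snippet.
base = 2.0   # stands in for F.linear(x, transpose(weight, ...))
lora = 0.5   # stands in for lora_B(lora_A(x)) * scaling
m = 1.25     # stands in for mag_norm_scale

# The correction term from the snippet:
result_dora = (m - 1) * base + m * lora

# Added to the base output that was computed elsewhere, it reproduces
# a full rescaling of both paths by mag_norm_scale:
assert base + result_dora == m * (base + lora)
```

In other words, `(mag_norm_scale - 1)` avoids double-counting the base path while still scaling it by the magnitude norm.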
-
**Is your feature request related to a problem? Please describe.**
The current implementation of the dreambooth trainings for both loras and finetuning is very memory intensive.
**Describe the sol…
-
Can you resume training? I noticed there are options to save state, but I'm not sure how to go about resuming training.
-
So I have a GPTQ llama model I downloaded (from TheBloke), and it's already 4-bit quantized. I have to pass in False for the load_in_4bit parameter of:
```
model, tokenizer = FastLlamaModel.from_pr…
```
-
Thanks for your awesome work!
I was wondering if you got any results on vision models like ViT or Stable Diffusion?