-
Does BrushNet support LCM-LoRA?
I followed the steps in:
https://huggingface.co/docs/diffusers/main/en/using-diffusers/inference_with_lcm_lora
but I found the results are not very good.
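For reference, a minimal sketch of the LCM-LoRA recipe from the linked diffusers guide (plain Stable Diffusion, not BrushNet; whether it composes with BrushNet's pipeline is exactly the open question here). The prompt and output filename are placeholders:

```python
# Key LCM-LoRA inference settings from the diffusers guide:
# very few steps and guidance_scale near 1.0.
LCM_SETTINGS = {"num_inference_steps": 4, "guidance_scale": 1.0}

def main():
    # Heavy imports kept inside main() so LCM_SETTINGS stays importable
    # without torch/diffusers installed.
    import torch
    from diffusers import DiffusionPipeline, LCMScheduler

    pipe = DiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")
    # Swap in the LCM scheduler and load the distilled LCM-LoRA weights.
    pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
    pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")

    image = pipe("a photo of a cat", **LCM_SETTINGS).images[0]
    image.save("lcm_cat.png")

if __name__ == "__main__":
    main()
```

If quality is poor, the usual first checks are that the scheduler was actually replaced with `LCMScheduler` and that `guidance_scale` was lowered to ~1.0-1.5; running LCM-LoRA with standard CFG values tends to produce washed-out results.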
-
I don't know what is happening: when I try to train, it raises the following error, and a few seconds later it says training concluded (of course, no LoRA was trained and it failed).
```
[…
```
-
### Describe the feature
Hi, when training a big model like llama2-70b with LoRA, it runs into OOM due to the unsharded model.
It would help a lot if LoRA were supported with `GeminiPlugin` or `Hybri…
-
Is this tool specifically designed for Flux?
-
I recently fine-tuned the Qwen2-VL 7B Instruct model using LoRA, with the USE_HF=1 environment variable set during fine-tuning. However, I am unable to find a way to merge the fine-tuned model and exp…
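A common way to merge a LoRA adapter back into its base model is peft's `merge_and_unload`; a minimal sketch, assuming the adapter was saved as a standard peft checkpoint and that your transformers version ships `Qwen2VLForConditionalGeneration` (the paths below are hypothetical):

```python
MERGED_DIR = "qwen2-vl-7b-merged"  # hypothetical output directory

def merge_lora(base_id: str, adapter_dir: str, out_dir: str = MERGED_DIR) -> str:
    # Heavy imports inside the function so the module loads without
    # torch/transformers/peft or model weights available.
    import torch
    from peft import PeftModel
    from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

    base = Qwen2VLForConditionalGeneration.from_pretrained(
        base_id, torch_dtype=torch.bfloat16
    )
    model = PeftModel.from_pretrained(base, adapter_dir)
    merged = model.merge_and_unload()  # folds LoRA deltas into the base weights
    merged.save_pretrained(out_dir)
    # Save the processor alongside so the merged dir is self-contained.
    AutoProcessor.from_pretrained(base_id).save_pretrained(out_dir)
    return out_dir

if __name__ == "__main__":
    merge_lora("Qwen/Qwen2-VL-7B-Instruct", "path/to/lora-checkpoint")
```

This only works if the checkpoint directory contains `adapter_config.json` and adapter weights; if the training framework saved a full state dict instead, its own merge/export command should be used.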
-
I want to train the model with my dataset. However, when I run finetune_lora.sh,
it shows: file /videollama2/model/language_model/videollama2_llama.py, line 22
from transformers import AutoConfig…
-
Thank you for the great work!!
On HF, there are several models with different settings, like normalcfg vs. smallcfg and 2-step to 16-step.
I'm wondering what the training params were for those model…
-
I configured the environment strictly following section 1.1.
First I hit a TypeError, then I updated the transformers package.
![微信图片_20240926103518](https://github.com/user-attachments/assets/e4615141-5aea-4799-8394-a7de5508c84b)
Then I ran the command python examples/generate_lora.py --ba…
-
Following the finetuning commands, I tried setting max_image_size to 512, but I still hit OOM on 40 GB. How much VRAM is needed to finetune? Are there any parameters that can reduce VRAM usage? Thank you
![捕获oom](htt…
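The usual VRAM-reduction levers are smaller per-device batch size with gradient accumulation, gradient checkpointing, bf16, and an 8-bit optimizer. A sketch assuming the finetune script forwards standard HF `TrainingArguments` flags (check the repo's argparse first; these flags are standard transformers options, not confirmed for this script):

```shell
bash finetune_lora.sh \
  --per_device_train_batch_size 1 \
  --gradient_accumulation_steps 16 \
  --gradient_checkpointing True \
  --bf16 True \
  --optim adamw_bnb_8bit
```

Gradient checkpointing trades compute for memory and is typically the single biggest saving; `adamw_bnb_8bit` requires bitsandbytes.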
-
Congrats and thank you again for a project that changes everything. I can't use anything else now, and I even prefer your Web UI to the standard text-web-ui...
In some instances it would be super-useful t…