-
This is a note-to-self, so it doesn't get forgotten:
We still need to include the radios (CORR_RADIO_EXT and CORR_RADIO_LORA) in the corrections priority handling.
We discussed that the best…
-
I've tried both the LoRA and Canny workflows, and I'm getting held up because the new nodes aren't looking in the proper places for them. They should be looking where all the other nodes look for them.
![SNAG-07…
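The expected behavior described above — resolving a model file by checking the same shared directories every other node searches — can be sketched as below. The function name, signature, and directory handling are illustrative assumptions, not the extension's actual code:

```python
import os

# Illustrative sketch only (not the real node's implementation): resolve a
# LoRA/ControlNet filename by searching the same configured model directories
# that the other nodes use, returning the first existing match.
def find_model_file(filename, search_dirs):
    """Return the first existing path for `filename` in `search_dirs`, else None."""
    for directory in search_dirs:
        candidate = os.path.join(directory, filename)
        if os.path.isfile(candidate):
            return candidate
    return None
```

The point of the sketch is that lookup should go through one shared list of directories rather than a path hard-coded in the new nodes.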
-
### System Info
Installed packages:
```
accelerate==0.34.0
asttokens==2.4.1
certifi==2024.8.30
charset-normalizer==3.3.2
comm==0.2.2
compel==2.0.3
debugpy==1.8.5
decorator==5.1.1
diffuse…
-
```
Error occurred when executing FluxLoraLoader:
Error(s) in loading state_dict for DoubleStreamBlockLoraProcessor:
Missing key(s) in state_dict: "qkv_lora1.down.weight", "qkv_lora1.up.weight", "p…
-
Hello, sorry to bother you again. This time I would like to ask how to set the block parameters when merging 2 Flux models. I know a Flux model has 19 double_blocks, 38 single_blocks, and base…
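A per-block weighted merge over those 19 double_blocks and 38 single_blocks can be sketched as follows. The key layout, ratio scheme, and function name are assumptions for illustration only (plain floats stand in for tensors), not the actual merge node's parameters:

```python
# Hedged sketch: merge two Flux-style state dicts with one ratio per block.
# `double_ratios` has 19 entries (one per double_blocks.{i}), `single_ratios`
# has 38 (one per single_blocks.{i}); `base_ratio` covers all remaining keys.
# Each ratio is the weight given to model B; model A gets (1 - ratio).
def merge_state_dicts(sd_a, sd_b, double_ratios, single_ratios, base_ratio=0.5):
    merged = {}
    for key, value_a in sd_a.items():
        value_b = sd_b[key]
        parts = key.split(".")
        if parts[0] == "double_blocks":
            ratio = double_ratios[int(parts[1])]
        elif parts[0] == "single_blocks":
            ratio = single_ratios[int(parts[1])]
        else:
            ratio = base_ratio
        merged[key] = (1.0 - ratio) * value_a + ratio * value_b
    return merged
```

With this layout, setting all 19 double-block ratios to 1.0 takes those blocks entirely from model B while the base ratio still blends the remaining weights.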
-
### Anything you want to discuss about vllm.
I've fine-tuned Qwen2.5-14B-Instruct using QLoRA (bitsandbytes 4-bit) and also done a full fine-tune. However, when I tried to use it with a quantized model (Qw…
-
Hi,
I noticed these nodes under Experimental. Do you have a quick explanation as to how they're meant to be used? Thanks.
-
Error CODE 1:
```
[2024-09-18 00:12:07] [INFO] 2024-09-18 00:12:07 WARNING cache_latents_to_disk is train_util.py:3936
[2024-09-18 00:12:07] [INFO] enabled, so cache_latents is
[2024-…
-
### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What would your feature do ?
I think bypassing is a feature so imp…
-
### System Info
Dear authors,
I have a question regarding training time when using the peft package. I tried using LoRA with a Swin Transformer to reduce the trainable parameter count.
```
model = Swi…