-
As the title says, LoRAs trained with Diffusers' scripts can't be loaded with the stock Lora Loader
![image](https://github.com/comfyanonymous/ComfyUI/assets/23042093/abb6379d-0b46-47ae-8fa5-0a7342fa1…
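For anyone hitting this, the usual culprit is key naming: Diffusers/PEFT checkpoints store weights under dot-separated module paths with `lora_A`/`lora_B` suffixes, while the stock loader expects kohya-style `lora_unet_...` names with `lora_down`/`lora_up`. A minimal rename sketch, assuming PEFT-style keys and a UNet-only adapter (real converters also emit per-module `.alpha` tensors and handle text-encoder keys):

```python
from safetensors.torch import load_file, save_file

def diffusers_lora_to_kohya(src: str, dst: str) -> None:
    # Diffusers/PEFT key:  "unet.down_blocks.0.<...>.to_q.lora_A.weight"
    # kohya-style key:     "lora_unet_down_blocks_0_<...>_to_q.lora_down.weight"
    state = load_file(src)
    out = {}
    for key, tensor in state.items():
        if ".lora_A." in key:
            module, rest = key.split(".lora_A.")
            suffix = "lora_down." + rest
        elif ".lora_B." in key:
            module, rest = key.split(".lora_B.")
            suffix = "lora_up." + rest
        else:
            out[key] = tensor  # pass anything unrecognized through untouched
            continue
        out["lora_" + module.replace(".", "_") + "." + suffix] = tensor
    save_file(out, dst)
```

This only shows the shape of the rename; a full converter needs to cover every module type the training script touched.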
-
### Question
```python
def get_peft_state_maybe_zero_3(named_params, bias):
    if bias == "none":  # no-bias mode: return only the LoRA weights
        to_return = {k: t for k, t in named_params if …
```
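For context, this helper (as seen in LLaVA-style training scripts) filters a model's named parameters down to the adapter weights so that only the LoRA state is checkpointed; the "maybe_zero_3" part refers to gathering parameters that DeepSpeed ZeRO-3 has sharded. A hedged usage sketch, assuming a PEFT-wrapped `model` already exists:

```python
import torch

# Save an adapter-only checkpoint instead of the full model weights.
lora_state = get_peft_state_maybe_zero_3(model.named_parameters(), bias="none")
torch.save(lora_state, "lora_adapter.bin")  # much smaller than a full checkpoint
```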
-
### Please check that this issue hasn't been reported before.
- [X] I searched previous [Bug Reports](https://github.com/OpenAccess-AI-Collective/axolotl/labels/bug) and didn't find any similar reports.
…
-
I hope this message finds you well. I wanted to take a moment to express my gratitude for providing the "inpainting" feature. Over the past couple of days, I've been thoroughly exploring its capabil…
-
I had an old version of Kohya until recently (I never updated, since it worked fine for what I need).
I had to reinstall Windows and reinstall Kohya, and now I can no longer merge LoRAs.
I know ver…
-
When attempting to merge LoRA weights into the TinyLLaVA-Gemma-SigLIP-2.4B model, I encountered a `RuntimeError` due to a missing key `lm_head.weight` in the `GemmaForCausalLM` `state_dict`. The specific erro…
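A plausible cause, worth verifying against the actual traceback: Gemma ties `lm_head.weight` to the input embeddings, so saved checkpoints often omit the `lm_head.weight` entry, and a strict `load_state_dict` on the merged weights then reports it as missing. A hedged workaround sketch, assuming the weight tie is the culprit:

```python
from torch import nn

def load_merged_gemma(model: nn.Module, state_dict: dict) -> None:
    # Gemma shares the output projection with the input embeddings, so
    # restore the omitted lm_head entry before loading.
    if "lm_head.weight" not in state_dict and "model.embed_tokens.weight" in state_dict:
        state_dict["lm_head.weight"] = state_dict["model.embed_tokens.weight"]
    missing, unexpected = model.load_state_dict(state_dict, strict=False)
    print("still missing:", missing)
```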
-
Hey,
Still using and loving your extension!
Quick question: on A1111 (yeah yeah, I need to move to Comfy, I know), I sometimes add a LoRA in the suffix and later notice that the suffix wasn'…
-
From the comments in Lora_distributed_finetuning, the effective batch size = num_accumulated_gradients * batch_size * nproc_per_node...
But shouldn't it be just batch_size * num_accumulated_gradients?
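For what it's worth, under data parallelism each of the `nproc_per_node` processes runs its own forward/backward on its own micro-batch, and the gradients averaged at each optimizer step come from all of them, which is why the per-node factor belongs in the effective batch size. A small worked example (the numbers are hypothetical):

```python
# Effective (global) batch size under data-parallel training.
batch_size = 4                 # per-process micro-batch
num_accumulated_gradients = 8  # gradient accumulation steps
nproc_per_node = 2             # data-parallel workers on the node

# batch_size * num_accumulated_gradients counts only ONE worker's samples.
per_process = batch_size * num_accumulated_gradients  # 32 samples per step, per worker
effective = per_process * nproc_per_node              # 64 samples averaged into each update
print(effective)
```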
-
### 🚀 The feature, motivation and pitch
Currently LoRA doesn't work with chunked prefill, because some of the LoRA index logic doesn't cover the case where sampling is not required. This also means LoRA i…
-
As pointed out by @janeyx99, our `quantize_base` argument will only quantize the base model weights of linear layers with LoRA applied to them (see e.g. [here](https://github.com/pytorch/torchtune/blo…
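To illustrate the gap being described (a hypothetical sketch, not torchtune's actual code): only linear layers wrapped with LoRA get their frozen base weight quantized, while plain `nn.Linear` layers elsewhere in the model stay in full precision.

```python
import torch
from torch import nn

class LoRALinear(nn.Module):
    """Stand-in for a LoRA-wrapped linear layer (hypothetical, for illustration)."""
    def __init__(self, in_features: int, out_features: int, rank: int = 8):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features), requires_grad=False)
        self.lora_a = nn.Linear(in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, out_features, bias=False)

def apply_quantize_base(model: nn.Module, quantize_fn) -> None:
    # Mirrors the behavior described above: only LoRA-wrapped linears get
    # their frozen base weight quantized via quantize_fn (hypothetical);
    # ordinary nn.Linear modules are skipped entirely, which is the gap.
    for module in model.modules():
        if isinstance(module, LoRALinear):
            module.weight = nn.Parameter(quantize_fn(module.weight), requires_grad=False)
```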