-
### Your current environment
```text
Collecting environment information...
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
…
-
Similar to [this](https://github.com/ai-dock/kohya_ss/issues/3#issuecomment-2351584426) open issue, I'd be very interested in a ComfyUI + Kohya_ss container. I would like to train LoRAs and use them r…
-
**Is your feature request related to a problem? Please describe.**
It would be great to be able to load a LoRA into a model compiled with `torch.compile`.
**Describe the solution you'd like.**
Do `load…
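For illustration, one way to apply a LoRA without changing module structure is to merge the low-rank update into the base weight, W' = W + scale · (B @ A), so tensor shapes stay fixed. This is a minimal sketch of that arithmetic in plain Python; `merge_lora` and the tiny matrices are hypothetical names, not part of PEFT or diffusers:

```python
# Illustrative sketch (not a library API): merge a LoRA update into a
# base weight matrix, W' = W + scale * (B @ A). Merging keeps weight
# shapes unchanged, which avoids altering the module graph.

def matmul(m1, m2):
    """Naive matrix product of two nested-list matrices."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*m2)]
            for row in m1]

def merge_lora(w, lora_a, lora_b, scale=1.0):
    """Return w + scale * (lora_b @ lora_a)."""
    delta = matmul(lora_b, lora_a)  # (out, r) @ (r, in) -> (out, in)
    return [[wij + scale * dij for wij, dij in zip(w_row, d_row)]
            for w_row, d_row in zip(w, delta)]

w = [[1.0, 0.0], [0.0, 1.0]]   # 2x2 identity base weight
a = [[1.0, 1.0]]               # A: rank 1 x in 2
b = [[0.5], [0.5]]             # B: out 2 x rank 1
print(merge_lora(w, a, b))     # [[1.5, 0.5], [0.5, 1.5]]
```

With `scale=0.0` the merge is a no-op, which is a handy sanity check when debugging whether a LoRA took effect.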
-
Enable buckets is set to false in the script; can we please get a checkbox option for this?
-
### Describe the bug
When using compel and prompt embeddings, and performing inference with LoRA weights loaded, `lora_scale` doesn't work as expected. Specifically, if I do the following actions i…
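For reference, the effect `lora_scale` is expected to have can be sketched in plain Python: it multiplies only the low-rank delta, not the base output. Names here are illustrative, not the diffusers implementation:

```python
# Illustrative sketch (not the diffusers implementation): a LoRA-adapted
# linear layer computes y = W x + lora_scale * B (A x), so lora_scale
# scales only the low-rank contribution.

def linear(x, w):
    """Multiply vector x by weight matrix w (one row per output dim)."""
    return [sum(xi * wi for xi, wi in zip(x, row)) for row in w]

def lora_forward(x, w_base, lora_a, lora_b, lora_scale=1.0):
    base = linear(x, w_base)
    delta = linear(linear(x, lora_a), lora_b)  # B (A x)
    return [b + lora_scale * d for b, d in zip(base, delta)]

x = [1.0, 2.0]
w = [[1.0, 0.0], [0.0, 1.0]]   # identity base weight
a = [[1.0, 1.0]]               # A: rank 1 x in 2
b = [[0.5], [0.5]]             # B: out 2 x rank 1
print(lora_forward(x, w, a, b, lora_scale=0.0))  # [1.0, 2.0] (base only)
print(lora_forward(x, w, a, b, lora_scale=1.0))  # [2.5, 3.5]
```

If `lora_scale=0.0` and `lora_scale=1.0` produce identical outputs in a real pipeline, the scale is not being propagated to the adapted layers, which matches the symptom described above.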
-
-
Hi guys! I tested the TCD LoRA but found that Euler's method combined with TCD achieves better results than the TCD sampler. I don't know why; is this normal?
-
### 🚀 The feature, motivation and pitch
Currently vLLM only supports LoRA adapters on NVIDIA GPUs with compute capability >= 8.0. This request is to support >= 7.5.
The limitation here is that vLL…
-
### Your current environment
Collecting environment information...
PyTorch version: 2.1.2+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubunt…
-
When I load a model trained with CPT with ["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj", "embed_tokens", "lm_head" ] layers for fine-tuning, the "embed_tokens" and "lm_h…