-
The same setting worked yesterday, but today training always shows loss: nan.
![image](https://github.com/user-attachments/assets/dddc65a5-0274-47ff-9373-795a364fc521)
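A report like this is hard to act on without a reproducer; one low-effort first step is to abort as soon as the loss turns NaN so the offending step and batch can be inspected, instead of logging `nan` forever. A minimal stdlib sketch (the list of per-step losses is a made-up stand-in for what a real training loop would produce):

```python
import math

def first_nan_step(losses):
    """Return the index of the first NaN loss, or None if all are finite.

    `losses` stands in for the per-step loss values a real training
    loop would yield; in practice you would call this check inside the
    loop and raise or dump the current batch when it fires.
    """
    for step, loss in enumerate(losses):
        if math.isnan(loss):
            return step
    return None

# Example: loss diverges into NaN at step 3.
nan_step = first_nan_step([0.92, 0.71, 0.55, float("nan"), 0.40])
```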
-
### 🚀 The feature, motivation and pitch
Hello,
I was delighted to see the implementation of the multi-LoRA feature and would like to express my gratitude for your efforts. However…
-
Hi everyone,
I trained a LoRA; now that I have enhanced my dataset, I would like to fine-tune that trained LoRA.
1. Do you know how to do it? Are there instructions for this?
I tried to change model …
-
Hello, here are the console logs:
```
12:21:36-202423 INFO Kohya_ss GUI version: v24.2.0
12:21:37-517593 INFO Submodule initialized and updated.
12:21:37-526597 INFO nVidia toolkit de…
-
How can I change the additional-networks custom LoRA path to point to my default Automatic1111 SD LoRA path?
-
### System Info
trl official DPO examples. Fine-tuning Llama 3.1 with LoRA.
params:
```
lora_rank: 32
lora_target: all
pref_beta: 0.2
pref_loss: sigmoid
```
### dataset
dataset: train_data
template:…
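For context on what `pref_beta: 0.2` and `pref_loss: sigmoid` control: the sigmoid DPO loss is `-log(sigmoid(beta * (chosen log-ratio - rejected log-ratio)))`, where each log-ratio compares the policy's log-probability of a completion to the reference model's. A minimal numeric sketch (the log-probability values below are made up for illustration, not from the reported run):

```python
import math

def dpo_sigmoid_loss(policy_chosen_logp, policy_rejected_logp,
                     ref_chosen_logp, ref_rejected_logp, beta=0.2):
    """Sigmoid DPO loss for a single preference pair.

    beta scales how strongly the policy is pushed away from the
    reference model; 0.2 matches the pref_beta above.
    """
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    logits = beta * (chosen_ratio - rejected_ratio)
    # -log(sigmoid(logits))
    return -math.log(1.0 / (1.0 + math.exp(-logits)))

# Made-up log-probs: the policy prefers the chosen answer slightly
# more than the reference does, so the loss dips below log(2).
loss = dpo_sigmoid_loss(-10.0, -12.0, -10.5, -11.5, beta=0.2)
```

When the policy and reference agree exactly, the logits are zero and the loss is `log(2)`; a NaN here usually traces back to extreme log-ratios overflowing `exp`.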
-
Hi, thanks for the excellent solution and resolution!
Please give more information about training a LoRA:
1) How much VRAM do I need?
2) How long does training take? How many epochs, and how many pics do I need?
3) What resol…
-
I noticed the response already has it appended, but [tokenizer.pad_token_id] is also appended in input_ids; are these two additions duplicated?
def process_func(example):
    MAX_LENGTH = 384  # the Llama tokenizer splits a single Chinese character into several tokens, so allow extra max length to keep the data intact
input_ids, atte…
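To make the duplication question concrete: in the common `process_func` pattern, prompt and response are tokenized separately and `[tokenizer.pad_token_id]` is appended once, standing in for EOS. If the tokenizer were already configured to add an EOS/pad token to the response, the explicit append would indeed duplicate it. A toy reconstruction with made-up token ids (no real tokenizer; `PAD_ID` and the id lists are illustrative, not the snippet's actual values):

```python
# Toy reconstruction of the process_func pattern with fake token ids.
PAD_ID = 0          # stands in for tokenizer.pad_token_id, used as EOS here
MAX_LENGTH = 384

def process_func(instruction_ids, response_ids, add_eos_in_response=False):
    """Build input_ids/labels the way the snippet does.

    If the tokenizer had already appended an end token to response_ids
    (add_eos_in_response=True), the explicit [PAD_ID] below would be a
    duplicate; when the tokenizer is called with add_special_tokens=False,
    it is appended exactly once.
    """
    if add_eos_in_response:
        response_ids = response_ids + [PAD_ID]
    input_ids = instruction_ids + response_ids + [PAD_ID]
    # Prompt tokens are masked out of the loss with -100.
    labels = [-100] * len(instruction_ids) + response_ids + [PAD_ID]
    return input_ids[:MAX_LENGTH], labels[:MAX_LENGTH]

ids, labels = process_func([101, 102], [201, 202])
dup_ids, _ = process_func([101, 102], [201, 202], add_eos_in_response=True)
```

So whether there is a duplicate depends entirely on whether the response tokenization already includes the end token; checking the last two ids of `input_ids` on one real sample settles it.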
-
### Expected Behavior
I get this error when running a LoRA. Not sure if it's something I'm doing wrong, or whether it doesn't like Float8_e4m3fn / running FP8?
Lora - https://civitai.com/models/30409…
-
I am training a task LoRA on "liuhaotian/llava-v1.5-13b" by following the same code as in the LLaVA repo:
https://github.com/haotian-liu/LLaVA/blob/main/scripts/v1_5/finetune_task_lora.sh
The above runs fi…