-
Hey,
Great to see LISA implemented here.
As for the background, I am trying to fine-tune models with LoRA and other techniques on domain data, but the task I am doing is causal LM, i.e. next-word predict…
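For context, a minimal sketch of what LoRA fine-tuning for causal LM (next-token prediction) on domain text typically looks like with Hugging Face `transformers` and `peft`; the model name, LoRA hyperparameters, and dataset file here are placeholder assumptions, not the setup from this repo:

```python
# Minimal LoRA causal-LM fine-tuning sketch (model name, hyperparameters, and
# dataset path are illustrative placeholders).
import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from peft import LoraConfig, get_peft_model

model_name = "gpt2"  # placeholder; swap in the actual base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Wrap the base model with LoRA adapters; only the adapter weights are trained.
lora_cfg = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()

# Tokenize a plain-text domain corpus for next-token prediction.
dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})
def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)
tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out", per_device_train_batch_size=4,
                           num_train_epochs=1, learning_rate=2e-4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```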
-
I know this is a path problem, and I have modified the code to add the parent directory, but the package still can't be found. step1_supervised_finetuning and utils are at the same level, so how do you so…
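If the layout really is a parent directory containing the sibling packages `step1_supervised_finetuning` and `utils`, one common workaround (a sketch only, assuming that layout; the actual repo structure may differ) is to push the parent directory onto `sys.path` before importing:

```python
# Sketch: make a sibling package importable when running a script from inside
# step1_supervised_finetuning/ (assumes utils/ sits next to it in the parent dir).
import os
import sys

# Resolve the parent of the directory containing this script and prepend it
# to the module search path.
parent_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
sys.path.insert(0, parent_dir)

import utils  # now resolvable because the parent directory is on sys.path
```

Alternatively, running the script as a module from the parent directory (`python -m step1_supervised_finetuning.main`, assuming a `main.py` entry point) avoids touching `sys.path` at all.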
-
Hi,
Did you try using multi-GPU for training?
I am hoping to fine-tune your model with 1/10 of the learning rate.
Wondering what change is needed if using 8 GPUs.
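One common heuristic (an assumption, not necessarily this repo's documented rule) is the linear scaling rule: scale the learning rate with the effective global batch size when moving to 8 GPUs, then apply the desired 1/10 factor on top. A rough sketch with placeholder numbers:

```python
# Linear learning-rate scaling sketch (heuristic assumption, not the repo's rule).
base_lr = 2e-5            # placeholder single-GPU learning rate
per_gpu_batch_size = 8    # placeholder per-GPU batch size
reference_batch_size = 8  # batch size the base_lr was tuned for
num_gpus = 8

effective_batch_size = per_gpu_batch_size * num_gpus
scaled_lr = base_lr * effective_batch_size / reference_batch_size

# Apply the additional 1/10 factor mentioned above.
final_lr = scaled_lr / 10
print(f"effective batch size: {effective_batch_size}, final lr: {final_lr:.2e}")
```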
-
### Feature description
For fine-tuning existing text-generation models, LoRA and QLoRA are popularly used. Can we create pipelines to download models from Hugging Face and then fine-tune the m…
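As a rough illustration of what such a pipeline might wrap (a sketch only, not a proposal for the final API; the model id and hyperparameters are placeholder assumptions), a QLoRA setup with `transformers` + `peft` + `bitsandbytes` typically looks like:

```python
# QLoRA sketch: download a model from the Hugging Face Hub, load it in 4-bit,
# and attach LoRA adapters (model id and hyperparameters are placeholders).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_name = "meta-llama/Llama-2-7b-hf"  # placeholder Hub model id

# 4-bit NF4 quantization so the frozen base model fits in limited GPU memory.
bnb_cfg = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, quantization_config=bnb_cfg, device_map="auto")

# Prepare the quantized model for training and add LoRA adapters.
model = prepare_model_for_kbit_training(model)
lora_cfg = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()
# ...from here the model can be passed to a Trainer / training loop as usual.
```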
-
### Question
I have successfully completed the pretraining stage, but for fine-tuning I encounter the following issues.
```
(llava2) wangyh@A16:/data/wangyh/mllms/LLaVA$ bash finetune2.sh
[2023-08-12 15:3…
```
-
Hello authors,
Congratulations on your work being accepted to Findings of ACL 2024!
The dataset preparation part of this work was very inspiring to me. While synthesizing my own instruction fine-tuning datasets, I found that some of the steps are not fully specified:
1. In group (1) **Wikipedia + Wikidata5M** of Section 2.1 Graph Caption Generation, what I loaded was wiki5…
-
I've got some issues with an AI project.
It's about model fine-tuning.
-
I am trying to fine-tune Llama 3.2 Vision Instruct, and I am using the distributed recipe and example (LoRA) config as a starting point. Eventually, I am looking to use a custom dataset, but first, I am…
-
Hi,
Could you upload the fine-tuned retrieval model? Thank you!
-
**Research question**
Does fine-tuning BERT increase performance?
**Hypothesis**
Yes, the classifier in BERT will also pull apart the word embeddings belonging to specific locations, making it eas…
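A rough sketch of how one might probe this hypothesis (assumptions: `bert-base-uncased` as the base checkpoint and a hypothetical fine-tuned checkpoint path; this is not the project's actual evaluation code), comparing how far apart location-word representations sit before and after fine-tuning:

```python
# Sketch: compare pairwise cosine similarity of location-word representations
# from a pretrained vs. a fine-tuned BERT encoder (checkpoint path is a placeholder).
import torch
from transformers import AutoModel, AutoTokenizer

def mean_pairwise_cosine(model_name, words):
    """Average pairwise cosine similarity of the words' [CLS] representations."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModel.from_pretrained(model_name)
    model.eval()
    with torch.no_grad():
        inputs = tokenizer(words, return_tensors="pt", padding=True)
        # Use the [CLS] vector of each single-word input as its representation.
        cls = model(**inputs).last_hidden_state[:, 0, :]
    cls = torch.nn.functional.normalize(cls, dim=-1)
    sims = cls @ cls.T
    # Average over off-diagonal entries only (diagonal is always 1).
    n = len(words)
    return (sims.sum() - n) / (n * (n - 1))

locations = ["london", "paris", "berlin", "tokyo"]
print("pretrained:", mean_pairwise_cosine("bert-base-uncased", locations).item())
# print("fine-tuned:", mean_pairwise_cosine("path/to/finetuned-bert", locations).item())
```

If the hypothesis holds, the fine-tuned encoder should report a lower average similarity, i.e. the location embeddings are pulled further apart.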