-
I am trying to fine-tune Llama 3.2 Vision Instruct, and I am using the distributed recipe and example (LoRA) config as a starting point. Eventually, I am looking to use a custom dataset, but first, I am…
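For context, a minimal sketch of what a LoRA setup for Llama 3.2 Vision can look like via Hugging Face Transformers + PEFT, as a point of reference alongside the torchtune recipe; the model id and target modules below are illustrative assumptions, not taken from the question:

```python
# Hedged sketch: LoRA on Llama 3.2 Vision via HF Transformers + PEFT
# (an alternative framing, not the torchtune distributed recipe itself).
import torch
from transformers import MllamaForConditionalGeneration, AutoProcessor
from peft import LoraConfig, get_peft_model

model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"  # assumed checkpoint
processor = AutoProcessor.from_pretrained(model_id)
model = MllamaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

lora_cfg = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # a common minimal choice
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # only the adapter weights train
```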
-
Hey,
Great to see LISA implemented here.
For background, I am trying to fine-tune models with LoRA and other techniques on domain data, but the task I am doing is causal LM, i.e. next-word predict…
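As a point of reference, a minimal sketch of LoRA fine-tuning for causal LM (next-word prediction) with PEFT; the model name and hyperparameters are illustrative assumptions:

```python
# Minimal sketch of LoRA for causal LM on domain text.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "meta-llama/Llama-2-7b-hf"  # placeholder model
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

cfg = LoraConfig(task_type="CAUSAL_LM", r=16, lora_alpha=32,
                 target_modules=["q_proj", "v_proj"], lora_dropout=0.05)
model = get_peft_model(model, cfg)

# For next-word prediction the model shifts labels internally,
# so labels=input_ids is the standard setup.
batch = tok("domain text goes here", return_tensors="pt")
loss = model(**batch, labels=batch["input_ids"]).loss
loss.backward()
```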
-
How do I fine-tune the model (with LoRA)? The pytorch_lightning pipeline is hard to understand and modify. Can you provide an API or a simpler pipeline?
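For illustration, a plain-PyTorch training loop that sidesteps the Lightning abstractions; every name here is an assumption, not the repository's actual API:

```python
# A framework-free alternative: one explicit training loop.
import torch
from torch.utils.data import DataLoader

def train(model, dataset, epochs=1, lr=1e-4, device="cuda"):
    model.to(device).train()
    opt = torch.optim.AdamW(
        (p for p in model.parameters() if p.requires_grad), lr=lr
    )
    loader = DataLoader(dataset, batch_size=4, shuffle=True)
    for _ in range(epochs):
        for batch in loader:
            batch = {k: v.to(device) for k, v in batch.items()}
            # assumes an HF-style model that returns .loss when the
            # batch includes labels
            loss = model(**batch).loss
            loss.backward()
            opt.step()
            opt.zero_grad()
```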
-
Hi,
Did you try using multi-GPU for training?
I am hoping to finetune your model with 1/10 of the learning rate.
I'm wondering what change is needed when using 8 GPUs.
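For reference, a hedged sketch of the usual learning-rate bookkeeping when moving to 8 GPUs, assuming the linear scaling rule; the base LR value is an assumption:

```python
# Launch with: torchrun --nproc_per_node 8 train.py
# With 8 GPUs at the same per-GPU batch size the effective batch is 8x
# larger, so the linear scaling rule multiplies the base LR by the
# world size before applying the manual 1/10 factor.
import torch.distributed as dist

dist.init_process_group("nccl")        # requires torchrun env vars
world_size = dist.get_world_size()     # 8 in this setup

base_lr = 2e-4                         # single-GPU reference LR (assumed)
lr = base_lr * world_size / 10         # linear scaling, then the 1/10 factor
print(f"rank {dist.get_rank()}: lr = {lr:.2e}")
```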
-
Hello authors,
Congratulations on your work being accepted to Findings of ACL 2024!
The dataset preparation part of this work inspired me a lot. While synthesizing my own instruction fine-tuning datasets, I found that some steps are not fully specified:
1. In group (1) **Wikipedia + Wikidata5M** of Section 2.1 Graph Caption Generation, what I loaded was the wiki5…
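For illustration, a hedged sketch of one possible verbalization step over the public Wikidata5M files; the file names follow the Wikidata5M release, and the one-sentence template is an assumption, not the paper's exact procedure:

```python
# Naive triple verbalization: map entity/relation IDs to their first alias.
def load_aliases(path):
    # each line: <id>\t<alias1>\t<alias2>\t...
    names = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip("\n").split("\t")
            if len(parts) >= 2:
                names[parts[0]] = parts[1]
    return names

entities = load_aliases("wikidata5m_entity.txt")
relations = load_aliases("wikidata5m_relation.txt")

with open("wikidata5m_transductive_train.txt", encoding="utf-8") as f:
    for line in f:
        h, r, t = line.rstrip("\n").split("\t")
        print(f"{entities.get(h, h)} {relations.get(r, r)} {entities.get(t, t)}.")
```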
-
First, I would like to thank ostris for this amazing tool; after trying Kohya and SimpleTuner, ai-toolkit gave me better results with great ease. I would like to know if there is a plan to create a …
-
### Question
I have successfully completed the pretrain stage, but for fine-tuning I encounter the following issues.
```
(llava2) wangyh@A16:/data/wangyh/mllms/LLaVA$ bash finetune2.sh
[2023-08-12 15:3…
```
-
Hi @gabrieltseng, I've read your paper and found it really interesting!
Thanks a lot for sharing your code as well!
I'm trying to adapt your downstream task [notebook](https://github.com/na…
-
### Describe the issue
**Issue:**
I ran into tokenization mismatch errors when I tried to fine-tune from Llama-3.1. I pre-trained a new MLP adapter for Llama-3.1 and that seems to work, but the fine…
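One way to narrow this down: LLaVA-style preprocessing masks the prompt portion of each conversation by re-tokenizing the prompt alone and assuming its ids are a strict prefix of the full sequence's ids, an assumption that can break with a new tokenizer. A hedged check (the strings and model id below are placeholders):

```python
# Quick prefix-consistency check for label masking.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")

prompt = "USER: describe the image ASSISTANT:"   # placeholder round
full = prompt + " a cat on a mat"

p_ids = tok(prompt, add_special_tokens=False).input_ids
f_ids = tok(full, add_special_tokens=False).input_ids

# If the prompt ids are not a prefix of the full ids, a label mask
# computed from len(p_ids) will be misaligned.
print(f_ids[: len(p_ids)] == p_ids, len(p_ids), len(f_ids))
```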
-
**Research question**
Does fine-tuning BERT increase performance?
**Hypothesis**
Yes, the classifier in BERT will also pull apart the word embeddings belonging to specific locations, making it eas…
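One hedged way to operationalize this hypothesis: compare pairwise distances between the contextual embeddings of location words from the pretrained and fine-tuned models; the model names and word list below are illustrative assumptions:

```python
# Probe: do location-word embeddings spread apart after fine-tuning?
import torch
from transformers import AutoModel, AutoTokenizer

def mean_pairwise_distance(model_name, words):
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModel.from_pretrained(model_name).eval()
    with torch.no_grad():
        embs = [
            model(**tok(w, return_tensors="pt")).last_hidden_state.mean(1)
            for w in words
        ]
    embs = torch.cat(embs)               # (n_words, hidden)
    return torch.cdist(embs, embs).mean().item()

locations = ["Paris", "Berlin", "Tokyo", "Cairo"]
print("pretrained:", mean_pairwise_distance("bert-base-uncased", locations))
# compare against a fine-tuned checkpoint, e.g. "path/to/finetuned-bert"
```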