-
First, I would like to thank ostris for this amazing tool; after trying Kohya and SimpleTuner, ai-toolkit gives me better results with much greater ease. I would like to know if there is a plan to create a …
-
Hello authors,
Congratulations on your work being accepted to Findings of ACL 2024!
The dataset preparation part of this work inspired me a lot. While synthesizing my own instruction FT datasets, I found that some of the steps are not fully specified:
1. In group (1) **Wikipedia + Wikidata5M** of Section 2.1 Graph Caption Generation, what I loaded was wiki5…
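For reference, a minimal sketch of reading the public Wikidata5M release with plain Python; the file names are assumptions based on the standard dump, and this is not the authors' pipeline:

```python
# Sketch: load Wikidata5M triples and entity descriptions from the public dump.
# File names follow the commonly distributed release and are assumptions here.

def load_triples(path):
    """Each line is: head_entity \t relation \t tail_entity."""
    with open(path, encoding="utf-8") as f:
        return [tuple(line.rstrip("\n").split("\t")) for line in f]

def load_entity_text(path):
    """Each line is: entity_id \t description text."""
    texts = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            entity_id, text = line.rstrip("\n").split("\t", 1)
            texts[entity_id] = text
    return texts

triples = load_triples("wikidata5m_transductive_train.txt")   # assumed file name
entity_text = load_entity_text("wikidata5m_text.txt")         # assumed file name
print(len(triples), "triples;", len(entity_text), "entities with text")
```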
-
Hi,
Is there any option, or planned improvement, for fine-tuning LASER on specific domains?
Thank you!
-
Hi,
Thanks for this repo!
I am trying to finetune VGGNet using the code given in the example. I tried the image_preloader on my dataset, but it threw errors while loading a few files (which PIL …
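For reference, a common workaround is to pre-filter the dataset and drop files PIL cannot read before calling image_preloader; a minimal sketch, where the `data/my_dataset` path is a placeholder:

```python
import os
from PIL import Image

def find_unreadable_images(dataset_dir):
    """Return paths of files that PIL cannot open and verify."""
    bad_files = []
    for root, _, files in os.walk(dataset_dir):
        for name in files:
            path = os.path.join(root, name)
            try:
                with Image.open(path) as img:
                    img.verify()  # cheap integrity check, does not decode the full image
            except Exception:
                bad_files.append(path)
    return bad_files

# Example: list (or delete) the offending files before building the preloader.
for path in find_unreadable_images("data/my_dataset"):
    print("skipping unreadable file:", path)
```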
-
How do I fine-tune the model (with LoRA)? The pytorch_lightning pipeline is hard to understand and modify. Could you provide a simpler API or pipeline?
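For context, a LoRA fine-tune outside the pytorch_lightning pipeline usually only needs `transformers` plus `peft`; a minimal sketch, where the model name and `target_modules` are placeholders rather than this repo's actual settings:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "your-base-model"  # placeholder: replace with the checkpoint used here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)

# Wrap the base model with LoRA adapters; only the adapter weights are trained.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # assumption: attention projection names
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # sanity check: only a small fraction should be trainable
```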
-
# I am planning to perform a two-stage fine-tuning process and need some guidance on how to proceed.
## First Stage
1. Load Base Model: I start by loading the base model, qwen1.5 32B.
2. Appl…
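For reference, the usual pattern between two adapter stages is to merge the first adapter into the base weights before starting the second; a minimal `peft` sketch, where the output paths, and the assumption that both stages use LoRA, are mine since the excerpt is cut off:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Stage 1 result: a LoRA adapter trained on top of the base model.
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-32B")   # base checkpoint
stage1 = PeftModel.from_pretrained(base, "outputs/stage1-lora")    # hypothetical adapter path

# Merge the stage-1 adapter into the base weights so stage 2 starts from a plain model.
merged = stage1.merge_and_unload()
merged.save_pretrained("outputs/stage1-merged")                    # hypothetical output path

# Stage 2 then loads "outputs/stage1-merged" as its base model and attaches a fresh LoRA.
```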
-
Hi @gabrieltseng, I've read your paper and found it really interesting work!
Thanks a lot for sharing your code as well!
I'm trying to adapt your downstream task [notebook](https://github.com/na…
-
Hi,
Did you try multi-GPU training?
I am hoping to fine-tune your model with 1/10 of the learning rate.
I'm wondering what changes are needed when using 8 GPUs.
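For reference, a minimal `torchrun` + DistributedDataParallel sketch of the 8-GPU change; the placeholder model and `base_lr` are illustrative, and note that the effective batch size grows 8x, which interacts with the intended 1/10 learning rate:

```python
# Minimal DDP sketch, to be launched with: torchrun --nproc_per_node=8 train.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

dist.init_process_group(backend="nccl")
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

model = torch.nn.Linear(512, 512).cuda(local_rank)  # placeholder for the real model
model = DDP(model, device_ids=[local_rank])

# The global batch size is 8x the per-GPU batch size, so the learning rate usually
# needs rescaling; base_lr / 10 reflects the 1/10 LR mentioned above (base_lr is a placeholder).
base_lr = 1e-3
optimizer = torch.optim.AdamW(model.parameters(), lr=base_lr / 10)
```

A DistributedSampler is also needed on the DataLoader so that each rank sees a distinct shard of the data.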
-
### Question
I have successfully completed the pretraining stage, but for fine-tuning I encounter the following issue.
```
(llava2) wangyh@A16:/data/wangyh/mllms/LLaVA$ bash finetune2.sh
[2023-08-12 15:3…
```
-
### Describe the issue
**Issue:**
I ran into tokenization mismatch errors when I tried to fine-tune from Llama-3.1. I pre-trained a new MLP adapter for Llama-3.1 and that seems to work, but the fine…
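As a quick diagnostic for tokenization mismatches, it can help to check that the conversation-template markers round-trip through the Llama-3.1 tokenizer as single tokens; a minimal sketch, where the model ID and sample string are placeholders:

```python
# Check that the separator / special tokens assumed by the conversation template
# actually survive tokenization. Model ID and sample text are placeholders.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")
print("special tokens:", tok.special_tokens_map)
print("additional special tokens:", tok.additional_special_tokens)

sample = "<|start_header_id|>assistant<|end_header_id|>\n\nHello"
ids = tok(sample, add_special_tokens=False).input_ids
print(ids)
print(tok.convert_ids_to_tokens(ids))  # header markers should stay single tokens
```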