-
First of all, thank you for the outstanding work you have been doing.
I have a couple of questions about the models and the fine-tuning process in the script:
…
-
I see a good amount of focus on how to perform full training of Mamba, but what about PEFT (adapter/LoRA fine-tuning)?
The base models are in fact "ready" for a fine-tune; however, due to…
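For context, the core of LoRA is freezing the pretrained weights and learning only a low-rank update. Here is a minimal NumPy sketch of that idea; the shapes, rank, and scaling are illustrative choices, not values from the Mamba codebase or any specific PEFT library:

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, r = 512, 512, 8             # r is the LoRA rank
W = rng.standard_normal((d_out, d_in))   # frozen pretrained weight (not trained)

# Trainable low-rank factors. B starts at zero so that, before any
# training, the adapted layer is identical to the base model.
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))
alpha = 16.0                             # LoRA scaling hyperparameter

def lora_forward(x):
    # y = W x + (alpha / r) * B A x  -- only A and B would receive gradients
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
assert np.allclose(lora_forward(x), W @ x)  # B == 0 -> same output as base

# Why PEFT is cheap: compare trainable parameter counts.
full_params = W.size
lora_params = A.size + B.size
print(f"trainable params: {lora_params} vs {full_params} "
      f"({lora_params / full_params:.1%})")
```

With rank 8 on a 512x512 projection, the adapter trains about 3% of the parameters of a full fine-tune; in a real setup you would apply such adapters to selected projection matrices and train only them.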
-
Hello authors,
Congratulations on your work being accepted to Findings of ACL 2024!
The dataset-preparation part of this work inspired me a lot. While synthesizing my own instruction fine-tuning datasets, I found that some steps are not fully specified:
1. In group (1) **Wikipedia + Wikidata5M** of Section 2.1 Graph Caption Generation, what I loaded was the wiki5…
-
Hello, thank you for a great library!
In https://github.com/RUCAIBox/RecBole/issues/1854 I asked about the possibility of running a fine-tuning procedure. The answer was that it is possible to finetune mo…
-
Hi @gabrieltseng, I've read your paper and found it really interesting!
Thanks a lot for sharing your code as well!
I'm trying to adapt your downstream task [notebook](https://github.com/na…
-
Hi, thanks for sharing this wonderful work. Since you use multi-frame, multi-view inputs during the pretraining stage, I want to know whether you still used temporal multi-frame inputs during fi…
-
Hey,
Great to see LISA implemented here.
For background, I am trying to fine-tune models with LoRA and other techniques on domain data, but the task I am doing is causal LM, i.e., next-word predict…
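Since causal-LM fine-tuning is just next-token prediction, the training loss pairs each position's logits with the token that follows it. A minimal NumPy illustration with a toy vocabulary and random logits (nothing here comes from any specific library):

```python
import numpy as np

vocab_size = 5
tokens = np.array([2, 0, 4, 1])          # toy token ids for one sequence
rng = np.random.default_rng(0)
# Stand-in for model outputs: one logit vector per position.
logits = rng.standard_normal((len(tokens), vocab_size))

# Causal shift: the logits at position t are scored against token t+1.
pred_logits = logits[:-1]
targets = tokens[1:]

# Cross-entropy over the shifted (prediction, target) pairs.
log_probs = pred_logits - np.log(np.exp(pred_logits).sum(axis=1, keepdims=True))
loss = -log_probs[np.arange(len(targets)), targets].mean()
print(f"next-token loss: {loss:.4f}")
```

Whether you train the full model or only LoRA adapters, this shifted cross-entropy is the objective being minimized; domain adaptation just changes which text the pairs come from.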
-
Hello everyone, below is my code for fine-tuning XTTS for a new language. It works well in my case with over 100 hours of audio.
https://github.com/nguyenhoanganh2002/XTTSv2-Finetuning-for-New-Lang…
-
How can I fine-tune the model (with LoRA)? The pytorch_lightning pipeline is hard to understand and modify. Could you provide an API or a simpler pipeline?
-
### Question
I have successfully completed the pretraining stage, but for fine-tuning I encounter the following issues.
```
(llava2) wangyh@A16:/data/wangyh/mllms/LLaVA$ bash finetune2.sh
[2023-08-12 15:3…